authorPaul Buetow <paul@buetow.org>2025-04-04 23:24:14 +0300
committerPaul Buetow <paul@buetow.org>2025-04-04 23:24:14 +0300
commitb0eeedb0c620563930c0cfca26ee6b6c78c2e2f5 (patch)
tree1c6dc5df7da455ae094cb6c5214c4c82382c8814 /gemfeed
parent07920d17ce75e84100842fb6b367bd2e5233635a (diff)
Update content for gemtext
Diffstat (limited to 'gemfeed')
-rw-r--r--gemfeed/atom.xml4
-rw-r--r--gemfeed/helix-gpt.log184
2 files changed, 187 insertions, 1 deletions
diff --git a/gemfeed/atom.xml b/gemfeed/atom.xml
index 0a678298..c0aeafce 100644
--- a/gemfeed/atom.xml
+++ b/gemfeed/atom.xml
@@ -1,6 +1,6 @@
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
- <updated>2025-04-04T23:21:02+03:00</updated>
+ <updated>2025-04-04T23:23:08+03:00</updated>
<title>foo.zone feed</title>
<subtitle>To be in the .zone!</subtitle>
<link href="gemini://foo.zone/gemfeed/atom.xml" rel="self" />
@@ -20,6 +20,8 @@
<div xmlns="http://www.w3.org/1999/xhtml">
<h1 style='display: inline' id='f3s-kubernetes-with-freebsd---part-4-rocky-linux-bhyve-vms'>f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</h1><br />
<br />
+<span class='quote'>Published at 2025-04-04T23:21:01+03:00</span><br />
+<br />
<span>This is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.</span><br />
<br />
<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br />
diff --git a/gemfeed/helix-gpt.log b/gemfeed/helix-gpt.log
index 8808371e..07f77c15 100644
--- a/gemfeed/helix-gpt.log
+++ b/gemfeed/helix-gpt.log
@@ -12328,3 +12328,187 @@ APP 2025-04-04T20:20:33.592Z --> sent request | {"jsonrpc":"2.0","method":"textD
APP 2025-04-04T20:20:43.108Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+APP 2025-04-04T20:22:12.532Z --> received didChange | language: markdown | contentVersion: 2350 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:12.613Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":6,"line":250},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":490}
+
+APP 2025-04-04T20:22:12.742Z --> received didChange | language: markdown | contentVersion: 2351 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:12.814Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run, or might perform sub-optimally, in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this default configuration won't boot Rocky Linux, and it needs some other adjustments as well: as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM. So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be automated and done unattended, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It makes the system wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, we edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nAs I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 14GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Admittedly, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am unsure why, as the VM has 14GB RAM, but I did not investigate further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) could include additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will install highly available storage with HAST, or start setting up k3s on the Rocky Linux VMs.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2351}
+
+APP 2025-04-04T20:22:12.814Z --> skipping because content is stale
+
+APP 2025-04-04T20:22:12.814Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+
+APP 2025-04-04T20:22:12.814Z --> sent request | {"jsonrpc":"2.0","id":490,"result":{"isIncomplete":false,"items":[]}}
+
+APP 2025-04-04T20:22:12.911Z --> received didChange | language: markdown | contentVersion: 2352 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:13.163Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":8,"line":250},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":491}
+
+APP 2025-04-04T20:22:13.364Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run, or might perform sub-optimally, in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this default configuration won't boot Rocky Linux, and it needs some other adjustments as well: as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM. So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be automated and done unattended, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It makes the system wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, we edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 14GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! To be clear, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will install highly available storage with HAST, or start setting up k3s on the Rocky Linux VMs.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2352}
+
+APP 2025-04-04T20:22:13.365Z --> calling completion event
+
+APP 2025-04-04T20:22:13.365Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":250,"character":0},"end":{"line":251,"character":0}}}]
+
+APP 2025-04-04T20:22:13.365Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":250,"character":0},"end":{"line":251,"character":0}},"source":"helix-gpt"}]}}
+
+APP 2025-04-04T20:22:13.365Z --> copilot | completion request
+
+APP 2025-04-04T20:22:13.366Z --> fetch | /v1/engines/copilot-codex/completions
+
+APP 2025-04-04T20:22:13.843Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200
+
+APP 2025-04-04T20:22:13.843Z --> completion hints: | can SSH into the VMs without a password.
+
+APP 2025-04-04T20:22:13.844Z --> sent request | {"jsonrpc":"2.0","id":491,"result":{"isIncomplete":false,"items":[{"label":"can SSH into the VMs without a password.","kind":1,"preselect":true,"detail":"can SSH into the VMs without a password.","insertText":"can SSH into the VMs without a password.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":250,"character":48},"end":{"line":250,"character":199}}}]}]}}
+
+APP 2025-04-04T20:22:13.844Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+
+APP 2025-04-04T20:22:15.050Z --> received didChange | language: markdown | contentVersion: 2353 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:15.225Z --> received didChange | language: markdown | contentVersion: 2354 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:23.367Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+
+APP 2025-04-04T20:22:45.243Z --> received didChange | language: markdown | contentVersion: 2355 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:45.334Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":90,"line":456},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":492}
+
+APP 2025-04-04T20:22:45.475Z --> received didChange | language: markdown | contentVersion: 2356 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:45.535Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run at all or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we have to adjust this configuration to make Rocky Linux boot, plus make some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it, as we do it only once a year or even less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VMs after a host reboot, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It makes Bhyve wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot each VM by running `reboot` inside it to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, we edited the `/etc/ssh/sshd_config` file again on all 3 VMs and 
configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 14GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! To be clear, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, we skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will install highly available storage with HAST, or start setting up k3s on the Rocky Linux VMs.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2356}
+
+APP 2025-04-04T20:22:45.535Z --> skipping because content is stale
+
+APP 2025-04-04T20:22:45.536Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+
+APP 2025-04-04T20:22:45.536Z --> sent request | {"jsonrpc":"2.0","id":492,"result":{"isIncomplete":false,"items":[]}}
+
+APP 2025-04-04T20:22:45.636Z --> received didChange | language: markdown | contentVersion: 2357 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:45.887Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":92,"line":456},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":493}
+
+APP 2025-04-04T20:22:46.089Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD base system but available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, the configuration needs some adjustments (e.g., as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give the VMs beefy specs: 4 CPU cores and 14GB of RAM). 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (with the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, we edit the `/etc/ssh/sshd_config` file again on all 3 VMs and 
set `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 14GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, we skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will install highly available storage with HAST, or start setting up k3s on the Rocky Linux VMs.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2357}
+
+APP 2025-04-04T20:22:46.090Z --> calling completion event
+
+APP 2025-04-04T20:22:46.090Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":456,"character":0},"end":{"line":457,"character":0}}}]
+
+APP 2025-04-04T20:22:46.090Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":456,"character":0},"end":{"line":457,"character":0}},"source":"helix-gpt"}]}}
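The "Fetching completion..." entry above is a plain `textDocument/publishDiagnostics` notification used as a progress indicator. As a minimal sketch (hypothetical helper names, ignoring LSP's `Content-Length` framing), the payload can be assembled like this:

```python
import json

def make_progress_diagnostic(uri: str, line: int, message: str = "Fetching completion...") -> str:
    """Build the publishDiagnostics notification helix-gpt logs while a
    completion request is in flight (severity 3 = Information)."""
    diagnostic = {
        "message": message,
        "severity": 3,  # LSP DiagnosticSeverity.Information
        "range": {
            "start": {"line": line, "character": 0},
            "end": {"line": line + 1, "character": 0},
        },
        "source": "helix-gpt",
    }
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "textDocument/publishDiagnostics",
        "params": {"uri": uri, "diagnostics": [diagnostic]},
    })

def clear_diagnostics(uri: str) -> str:
    """Clearing the progress message afterwards is the same notification
    with an empty diagnostics list, as seen later in the log."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "textDocument/publishDiagnostics",
        "params": {"uri": uri, "diagnostics": []},
    })
```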
+
+APP 2025-04-04T20:22:46.090Z --> copilot | completion request
+
+APP 2025-04-04T20:22:46.091Z --> fetch | /v1/engines/copilot-codex/completions
+
+APP 2025-04-04T20:22:46.515Z --> received didChange | language: markdown | contentVersion: 2358 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:46.609Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200
+
+APP 2025-04-04T20:22:46.610Z --> completion hints: | need to build it from source. The following commands are executed on the Rocky Linux VMs:
+
+APP 2025-04-04T20:22:46.610Z --> sent request | {"jsonrpc":"2.0","id":493,"result":{"isIncomplete":false,"items":[{"label":"need to build it from source. The following commands are executed on the Rocky Linux VMs:","kind":1,"preselect":true,"detail":"need to build it from source. The following commands are executed on the Rocky Linux VMs:","insertText":"need to build it from source. The following commands are executed on the Rocky Linux VMs:","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":456,"character":181},"end":{"line":456,"character":200}}}]}]}}
+
+APP 2025-04-04T20:22:46.610Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+
+APP 2025-04-04T20:22:46.627Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":93,"line":456},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":494}
+
+APP 2025-04-04T20:22:46.829Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD base system but available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, the configuration needs some adjustments (e.g., as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give the VMs beefy specs: 4 CPU cores and 14GB of RAM). 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (with the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, we edit the `/etc/ssh/sshd_config` file again on all 3 VMs and 
set `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 14GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, we this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will be installing highly available storage with HAST or we start setting up k3s on the Rocky Linux VMs.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2358}
+
+APP 2025-04-04T20:22:46.829Z --> calling completion event
+
+APP 2025-04-04T20:22:46.830Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":456,"character":0},"end":{"line":457,"character":0}}}]
+
+APP 2025-04-04T20:22:46.830Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":456,"character":0},"end":{"line":457,"character":0}},"source":"helix-gpt"}]}}
+
+APP 2025-04-04T20:22:46.830Z --> copilot | completion request
+
+APP 2025-04-04T20:22:46.830Z --> fetch | /v1/engines/copilot-codex/completions
+
+APP 2025-04-04T20:22:47.067Z --> received didChange | language: markdown | contentVersion: 2359 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:47.116Z --> received didChange | language: markdown | contentVersion: 2360 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:47.223Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200
+
+APP 2025-04-04T20:22:47.224Z --> completion hints: | have to use the `sillybench` benchmark again. I installed it in the same way as before and ran the same benchmarks as above.
+
+APP 2025-04-04T20:22:47.225Z --> sent request | {"jsonrpc":"2.0","id":494,"result":{"isIncomplete":false,"items":[{"label":"have to use the `sillybench` benchmark again. I installed it in the same way as before and ran the same benchmarks as above.","kind":1,"preselect":true,"detail":"have to use the `sillybench` benchmark again. I installed it in the same way as before and ran the same benchmarks as above.","insertText":"have to use the `sillybench` benchmark again. I installed it in the same way as before and ran the same benchmarks as above.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":456,"character":217},"end":{"line":456,"character":227}}}]}]}}
+
+APP 2025-04-04T20:22:47.225Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+
+APP 2025-04-04T20:22:47.284Z --> received didChange | language: markdown | contentVersion: 2361 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:47.670Z --> received didChange | language: markdown | contentVersion: 2362 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:47.788Z --> received didChange | language: markdown | contentVersion: 2363 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:47.794Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":98,"line":456},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":495}
+
+APP 2025-04-04T20:22:47.995Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we need to adjust this configuration to make Rocky Linux boot (plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs; the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, we edit the `/etc/ssh/sshd_config` file again on all 3 VMs and 
set `PasswordAuthentication no` to allow only SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 14GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am aware that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, we skip this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will be installing highly available storage with HAST or we start setting up k3s on the Rocky Linux VMs.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2363}
+
+APP 2025-04-04T20:22:47.995Z --> calling completion event
+
+APP 2025-04-04T20:22:47.995Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":456,"character":0},"end":{"line":457,"character":0}}}]
+
+APP 2025-04-04T20:22:47.995Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":456,"character":0},"end":{"line":457,"character":0}},"source":"helix-gpt"}]}}
+
+APP 2025-04-04T20:22:47.995Z --> copilot | completion request
+
+APP 2025-04-04T20:22:47.996Z --> fetch | /v1/engines/copilot-codex/completions
+
+APP 2025-04-04T20:22:48.452Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200
+
+APP 2025-04-04T20:22:48.453Z --> completion hints: | this benchmark for now. If you know how to install it on Rocky Linux, please let me know.
+
+APP 2025-04-04T20:22:48.453Z --> sent request | {"jsonrpc":"2.0","id":495,"result":{"isIncomplete":false,"items":[{"label":"this benchmark for now. If you know how to install it on Rocky Linux, please let me know.","kind":1,"preselect":true,"detail":"this benchmark for now. If you know how to install it on Rocky Linux, please let me know.","insertText":"this benchmark for now. If you know how to install it on Rocky Linux, please let me know.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":456,"character":187},"end":{"line":456,"character":197}}}]}]}}
+
+APP 2025-04-04T20:22:48.453Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+
+APP 2025-04-04T20:22:54.225Z --> received didChange | language: markdown | contentVersion: 2364 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:54.334Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":184,"line":460},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":496}
+
+APP 2025-04-04T20:22:54.464Z --> received didChange | language: markdown | contentVersion: 2365 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:54.536Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs some adjustments to make Rocky Linux boot (and, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we also give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on each image file to enlarge it to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps on all three VMs. In the following examples, everything is executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static IPs. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the three VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, we edit the `/etc/ssh/sshd_config` file again on all 3 VMs and 
configure `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 14GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Admittedly, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, we skip this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, we can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will install highly available storage with HAST, or we will start setting up k3s on the Rocky Linux VMs.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2365}
+
+APP 2025-04-04T20:22:54.536Z --> skipping because content is stale
+
+APP 2025-04-04T20:22:54.537Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+
+APP 2025-04-04T20:22:54.537Z --> sent request | {"jsonrpc":"2.0","id":496,"result":{"isIncomplete":false,"items":[]}}
+
+APP 2025-04-04T20:22:54.633Z --> received didChange | language: markdown | contentVersion: 2366 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:54.883Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":186,"line":460},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":497}
+
+APP 2025-04-04T20:22:55.084Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs some adjustments to make Rocky Linux boot (and, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we also give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on each image file to enlarge it to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps on all three VMs. In the following examples, everything is executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static IPs. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the three VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, we edit the `/etc/ssh/sshd_config` file again on all 3 VMs and 
configure `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 14GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Admittedly, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, we skip this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, we can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will install highly available storage with HAST, or we will start setting up k3s on the Rocky Linux VMs.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2366}
+
+APP 2025-04-04T20:22:55.084Z --> calling completion event
+
+APP 2025-04-04T20:22:55.084Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":460,"character":0},"end":{"line":461,"character":0}}}]
+
+APP 2025-04-04T20:22:55.085Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":460,"character":0},"end":{"line":461,"character":0}},"source":"helix-gpt"}]}}
+
+APP 2025-04-04T20:22:55.085Z --> copilot | completion request
+
+APP 2025-04-04T20:22:55.085Z --> fetch | /v1/engines/copilot-codex/completions
+
+APP 2025-04-04T20:22:55.722Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200
+
+APP 2025-04-04T20:22:55.724Z --> completion hints: | can run various workloads, including k3s, which is a lightweight Kubernetes distribution. This setup allows us to keep our FreeBSD hosts clean and focused on their primary tasks while still benefiting from the flexibility of virtualization.
+
+APP 2025-04-04T20:22:55.724Z --> sent request | {"jsonrpc":"2.0","id":497,"result":{"isIncomplete":false,"items":[{"label":"can run various workloads, including k3s, which is a lightweight Kubernetes distribution. This setup allows us to keep our FreeBSD hosts clean and focused on their primary tasks while still benefiting from the flexibility of virtualization.","kind":1,"preselect":true,"detail":"can run various workloads, including k3s, which is a lightweight Kubernetes distribution. This setup allows us to keep our FreeBSD hosts clean and focused on their primary tasks while still benefiting from the flexibility of virtualization.","insertText":"can run various workloads, including k3s, which is a lightweight Kubernetes distribution. This setup allows us to keep our FreeBSD hosts clean and focused on their primary tasks while still benefiting from the flexibility of virtualization.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":460,"character":426},"end":{"line":460,"character":545}}}]}]}}
+
+APP 2025-04-04T20:22:55.724Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+
+APP 2025-04-04T20:22:56.090Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+
+APP 2025-04-04T20:22:56.832Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+
+APP 2025-04-04T20:22:57.996Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+
+APP 2025-04-04T20:23:05.086Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+