APP 2025-04-04T18:41:23.984Z --> triggerCharacters: | ["{","("," "] APP 2025-04-04T18:41:23.990Z --> received request: | {"jsonrpc":"2.0","method":"initialize","params":{"capabilities":{"general":{"positionEncodings":["utf-8","utf-32","utf-16"]},"textDocument":{"codeAction":{"codeActionLiteralSupport":{"codeActionKind":{"valueSet":["","quickfix","refactor","refactor.extract","refactor.inline","refactor.rewrite","source","source.organizeImports"]}},"dataSupport":true,"disabledSupport":true,"isPreferredSupport":true,"resolveSupport":{"properties":["edit","command"]}},"completion":{"completionItem":{"deprecatedSupport":true,"insertReplaceSupport":true,"resolveSupport":{"properties":["documentation","detail","additionalTextEdits"]},"snippetSupport":true,"tagSupport":{"valueSet":[1]}},"completionItemKind":{}},"formatting":{"dynamicRegistration":false},"hover":{"contentFormat":["markdown"]},"inlayHint":{"dynamicRegistration":false},"publishDiagnostics":{"tagSupport":{"valueSet":[1,2]},"versionSupport":true},"rename":{"dynamicRegistration":false,"honorsChangeAnnotations":false,"prepareSupport":true},"signatureHelp":{"signatureInformation":{"activeParameterSupport":true,"documentationFormat":["markdown"],"parameterInformation":{"labelOffsetSupport":true}}}},"window":{"workDoneProgress":true},"workspace":{"applyEdit":true,"configuration":true,"didChangeConfiguration":{"dynamicRegistration":false},"didChangeWatchedFiles":{"dynamicRegistration":true,"relativePatternSupport":false},"executeCommand":{"dynamicRegistration":false},"fileOperations":{"didRename":true,"willRename":true},"inlayHint":{"refreshSupport":false},"symbol":{"dynamicRegistration":false},"workspaceEdit":{"documentChanges":true,"failureHandling":"abort","normalizesLineEndings":false,"resourceOperations":["create","rename","delete"]},"workspaceFolders":true}},"clientInfo":{"name":"helix","version":"25.01.1"},"processId":1643459,"rootPath":"/home/paul/git/foo.zone-content/gemtext","rootUri":"file:///home/paul/git
/foo.zone-content/gemtext","workspaceFolders":[{"name":"gemtext","uri":"file:///home/paul/git/foo.zone-content/gemtext"}]},"id":0} APP 2025-04-04T18:41:23.991Z --> sent request | {"jsonrpc":"2.0","id":0,"result":{"capabilities":{"codeActionProvider":true,"executeCommandProvider":{"commands":["resolveDiagnostics","generateDocs","improveCode","refactorFromComment","writeTest"]},"completionProvider":{"resolveProvider":false,"triggerCharacters":["{","("," "]},"textDocumentSync":{"change":1,"openClose":true}}}} APP 2025-04-04T18:41:23.992Z --> received didOpen | language: markdown APP 2025-04-04T18:51:46.406Z --> received didChange | language: markdown | contentVersion: 370 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:51:49.406Z --> received didChange | language: markdown | contentVersion: 371 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:51:54.832Z --> received didChange | language: markdown | contentVersion: 372 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:51:56.493Z --> received didChange | language: markdown | contentVersion: 373 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:52:42.936Z --> received didChange | language: markdown | contentVersion: 374 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:52:43.364Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":59,"line":230},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":1} APP 2025-04-04T18:52:43.568Z --> running completion on buffer | 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, I run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default 
configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot (plus some other adjustments; e.g., as I intend to run the majority of the workload in the k3s cluster on those Linux VMs, I give them beefy specs of 4 CPU cores and 14GB of RAM), I ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. 
This could be done unattended or in a more automated fashion, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. 
In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat << END >> /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux, but it doesn't reach reach \n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":374} APP 2025-04-04T18:52:43.568Z --> calling completion event APP 2025-04-04T18:52:43.568Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":230,"character":0},"end":{"line":231,"character":0}}}] APP 2025-04-04T18:52:43.569Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":230,"character":0},"end":{"line":231,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:52:43.569Z --> 
copilot | completion request APP 2025-04-04T18:52:43.570Z --> fetch | /copilot_internal/v2/token APP 2025-04-04T18:52:43.574Z --> received didChange | language: markdown | contentVersion: 375 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:52:43.678Z --> failed to parse line: | failed to parse | Content-Length: 13538 APP 2025-04-04T18:52:43.679Z --> received didChange | language: markdown | contentVersion: 376 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:52:43.798Z --> received didChange | language: markdown | contentVersion: 377 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:52:43.803Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":62,"line":230},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":2} APP 2025-04-04T18:52:44.004Z --> calling completion event APP 2025-04-04T18:52:44.004Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":230,"character":0},"end":{"line":231,"character":0}}}] APP 2025-04-04T18:52:44.004Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":230,"character":0},"end":{"line":231,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:52:44.004Z --> 
copilot | completion request APP 2025-04-04T18:52:44.005Z --> fetch | /copilot_internal/v2/token APP 2025-04-04T18:52:44.005Z --> sending diagnostics | [{"message":"","severity":1,"range":{"start":{"line":230,"character":0},"end":{"line":231,"character":0}}}] APP 2025-04-04T18:52:44.006Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":230,"character":0},"end":{"line":231,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:52:44.181Z --> received didChange | language: markdown | contentVersion: 378 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:52:44.360Z --> received didChange | language: markdown | contentVersion: 379 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:52:44.525Z --> received didChange | language: markdown | contentVersion: 380 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:52:44.560Z --> response | https://api.github.com/copilot_internal/v2/token | 200 APP 2025-04-04T18:52:44.561Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:52:44.577Z --> received didChange | language: markdown | contentVersion: 381 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:52:44.584Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":60,"line":230},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":3} APP 
2025-04-04T18:52:44.711Z --> received didChange | language: markdown | contentVersion: 382 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:52:44.785Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. 
If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run at all, or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, I run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD base system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 
1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot (plus some other adjustments: as I intend to run the majority of the workload in the k3s cluster on those Linux VMs, I give them beefy specs of 4 CPU cores and 14GB of RAM), I ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. 
This could be done unattended or automated further, but there are only 3 VMs to install, and the automation doesn't seem worth it, as we do this only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. 
In the following, the examples are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM after a host reboot, I added it to the `vm_list` variable in the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % doas sysrc vm_list=\"rocky\"\nvm_list: -> rocky\n```\n\nI also added the VM addresses to `/etc/hosts`:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux, but it doesn't quite reach the bare-metal FreeBSD results.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":382} APP 2025-04-04T18:52:44.785Z --> skipping because content is stale APP 2025-04-04T18:52:44.785Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:52:44.785Z --> sent request | {"jsonrpc":"2.0","id":3,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:52:44.805Z --> received didChange | language: markdown | contentVersion: 383 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:52:44.886Z --> 
received didChange | language: markdown | contentVersion: 384 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:52:44.892Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":63,"line":230},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":4} APP 2025-04-04T18:52:45.093Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","languageId":"markdown","version":384} APP 2025-04-04T18:52:45.094Z --> calling completion event APP 2025-04-04T18:52:45.094Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":230,"character":0},"end":{"line":231,"character":0}}}] APP 2025-04-04T18:52:45.094Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":230,"character":0},"end":{"line":231,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:52:45.094Z --> 
copilot | completion request APP 2025-04-04T18:52:45.094Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:52:45.095Z --> sending diagnostics | [{"message":"","severity":1,"range":{"start":{"line":230,"character":0},"end":{"line":231,"character":0}}}] APP 2025-04-04T18:52:45.095Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":230,"character":0},"end":{"line":231,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:52:45.992Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 APP 2025-04-04T18:52:45.993Z --> completion hints: | and then connected to it via SSH from my laptop: APP 2025-04-04T18:52:45.994Z --> sent request | {"jsonrpc":"2.0","id":4,"result":{"isIncomplete":false,"items":[{"label":"and then connected to it via SSH from my laptop:","kind":1,"preselect":true,"detail":"and then connected to it via SSH from my laptop:","insertText":"and then connected to it via SSH from my laptop:","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":230,"character":111},"end":{"line":230,"character":177}}}]}]}} APP 2025-04-04T18:52:45.994Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:52:53.571Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:52:54.005Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:52:54.005Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:52:55.095Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:52:55.096Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:53:06.326Z --> received didChange | language: markdown | contentVersion: 385 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:53:13.175Z --> received didChange | language: markdown | contentVersion: 386 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:53:13.270Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":70,"line":107},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":5} APP 2025-04-04T18:53:13.472Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s 
series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It leverages the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction is important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run at all, or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD base system but available as a ready-to-use package. 
It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot (plus some other 
adjustments: as I intend to run the majority of the workload in the k3s cluster on those Linux VMs, I give them beefy specs of 4 CPU cores and 14GB of RAM), I ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or automated further, but there are only 3 VMs to install, and the automation doesn't seem worth it, as we do this only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

## After install

I performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM after a host reboot, I registered it in vm-bhyve's `vm_list` in the `rc.conf` on the FreeBSD hosts (e.g. with `doas sysrc vm_list="rocky"`).

I also added the VMs' names and IPs to `/etc/hosts`:

```sh
paul@f0:/bhyve/rocky % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing CPU

The aim is to prove that Bhyve VMs are CPU efficient.
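For context on the benchmark numbers that follow: sub-nanosecond ns/op results mean each iteration takes only a CPU cycle or two. A rough conversion looks like this (the ~3.4 GHz boost clock for the Intel N100 is an assumption here, not a measured value):

```shell
# Convert a benchmark result in ns/op into approximate CPU cycles per iteration.
ns_per_op="0.40"   # a typical result from the benchmark runs in this post
ghz="3.4"          # assumed boost clock; 1 GHz = 1 cycle per nanosecond
cycles=$(awk -v n="$ns_per_op" -v g="$ghz" 'BEGIN { printf "%.2f", n * g }')
printf 'approx. %s cycles per benchmark iteration\n' "$cycles"
```

At roughly 1-2 cycles per iteration, the loop body is clearly tiny, so the benchmark mostly measures raw loop throughput rather than any complex workload.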
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux, but it doesn't quite reach the bare-metal FreeBSD numbers.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
copilot | completion request APP 2025-04-04T18:53:13.474Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:53:13.656Z --> received didChange | language: markdown | contentVersion: 387 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:53:13.839Z --> received didChange | language: markdown | contentVersion: 388 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:53:13.844Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 APP 2025-04-04T18:53:13.845Z --> completion hints: | the memory is too small), I changed the configuration to this: APP 2025-04-04T18:53:13.846Z --> sent request | {"jsonrpc":"2.0","id":5,"result":{"isIncomplete":false,"items":[{"label":"the memory is too small), I changed the configuration to this:","kind":1,"preselect":true,"detail":"the memory is too small), I changed the configuration to this:","insertText":"the memory is too small), I changed the configuration to this:","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":107,"character":132},"end":{"line":107,"character":330}}}]}]}} APP 2025-04-04T18:53:13.846Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:53:16.947Z --> received didChange | language: markdown | contentVersion: 389 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:53:17.068Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":163,"line":107},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":6} APP 2025-04-04T18:53:17.198Z --> received didChange | language: markdown | contentVersion: 390 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:53:17.269Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal 
iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, wgive them beefy specs like 4 CPU cores and 14GB RAM), I run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. 
This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. 
In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux, but it doesn't reach reach \n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":390} APP 2025-04-04T18:53:17.269Z --> skipping because content is stale APP 2025-04-04T18:53:17.269Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:53:17.269Z --> sent request | {"jsonrpc":"2.0","id":6,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:53:17.377Z --> received didChange | language: markdown | contentVersion: 391 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:53:17.949Z --> 
received didChange | language: markdown | contentVersion: 392 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:53:17.955Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":166,"line":107},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":7} APP 2025-04-04T18:53:18.156Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. 
as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM), I run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4

This is the fourth blog post in the f3s series about self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is FreeBSD's lightweight, modern hypervisor. Its strengths include minimal overhead, which allows it to achieve near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that provide hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

`POPCNT` is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for `POPCNT` matters because guest operating systems use this instruction to perform various tasks more efficiently. If the host CPU supports `POPCNT`, Bhyve can pass this capability through to the virtual machines for better performance. Without `POPCNT` support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

TODO: Why this Distro?

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ on each host.

To make Rocky Linux boot, plus some other adjustments (as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM), I ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, I run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it, as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for my purposes. So I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened a VNC client on my Fedora laptop (GNOME comes with a simple one) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

I mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

## After install

I performed the following steps on all 3 VMs. The examples below are all executed on `f0` (for the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, I added the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % doas sysrc vm_list="rocky"
vm_list:  -> rocky
```

### `/etc/hosts` entries

I also added host entries for the three VMs to `/etc/hosts`:

```sh
paul@f0:/bhyve/rocky % cat << END >> /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding the line `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient.

As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```
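
To put the two sets of numbers into perspective, the relative difference can be computed from the measured ns/op figures (a back-of-the-envelope sketch using the `BenchmarkCPUSilly1` values from the runs above):

```go
package main

import "fmt"

func main() {
	// ns/op figures for BenchmarkCPUSilly1, taken from the runs above.
	host := 0.4022    // FreeBSD host
	linuxVM := 0.4347 // Rocky Linux Bhyve VM

	// Relative slowdown of the VM compared to the bare-metal host.
	overhead := (linuxVM/host - 1) * 100
	fmt.Printf("Rocky Linux VM vs. host: %.1f%% slower\n", overhead) // about 8%
}
```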

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux, but it doesn't quite reach the performance of the FreeBSD host system.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
copilot | completion request APP 2025-04-04T18:53:20.110Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:53:20.705Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 APP 2025-04-04T18:53:20.706Z --> completion hints: | we need to adjust the configuration a bit. I also added a `graphics` section to enable VNC access to the VM, which is useful for the installation process. APP 2025-04-04T18:53:20.707Z --> sent request | {"jsonrpc":"2.0","id":8,"result":{"isIncomplete":false,"items":[{"label":"we need to adjust the configuration a bit. I also added a `graphics` section to enable VNC access to the VM, which is useful for the installation process.","kind":1,"preselect":true,"detail":"we need to adjust the configuration a bit. I also added a `graphics` section to enable VNC access to the VM, which is useful for the installation process.","insertText":"we need to adjust the configuration a bit. I also added a `graphics` section to enable VNC access to the VM, which is useful for the installation process.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":107,"character":374},"end":{"line":107,"character":423}}}]}]}} APP 2025-04-04T18:53:20.707Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:53:22.772Z --> received didChange | language: markdown | contentVersion: 394 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:53:23.473Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:53:28.157Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:53:30.110Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:53:32.067Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":220,"line":107},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":9} APP 2025-04-04T18:53:32.269Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. 
as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux, but it doesn't reach reach \n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":394} APP 2025-04-04T18:53:32.269Z --> calling completion event APP 2025-04-04T18:53:32.269Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":107,"character":0},"end":{"line":108,"character":0}}}] APP 2025-04-04T18:53:32.269Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":107,"character":0},"end":{"line":108,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:53:32.270Z --> 
copilot | completion request APP 2025-04-04T18:53:32.270Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:53:32.290Z --> received didChange | language: markdown | contentVersion: 395 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:53:32.365Z --> received didChange | language: markdown | contentVersion: 396 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:53:32.462Z --> received didChange | language: markdown | contentVersion: 397 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:53:32.468Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":223,"line":107},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":10} APP 2025-04-04T18:53:32.585Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 APP 2025-04-04T18:53:32.586Z --> completion hints: | APP 2025-04-04T18:53:32.586Z --> sent request | {"jsonrpc":"2.0","id":9,"result":{"isIncomplete":false,"items":[{"label":"","kind":1,"preselect":true,"detail":"","insertText":"","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":107,"character":220},"end":{"line":107,"character":269}}}]}]}} APP 2025-04-04T18:53:32.586Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:53:32.670Z --> running completion on buffer | 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `zroot/bhyve` ZFS dataset:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat 
rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nTo make Rocky Linux boot, the configuration needs some adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14G of RAM. So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I ran:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. 
This could be automated and done unattended, but with only 3 VMs to install (and re-installs happening once a year or less often), the automation doesn't seem worth it.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\nI mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. 
The examples below are all executed on `f0` (or inside the VM `r0`, which runs on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VMs after a host reboot, I enabled them in the `rc.conf` of the FreeBSD hosts. I also added the following entries to `/etc/hosts` on each host:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux, but it doesn't quite reach the bare-metal numbers of the host.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":397} APP 2025-04-04T18:53:32.670Z --> calling completion event APP 2025-04-04T18:53:32.670Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":107,"character":0},"end":{"line":108,"character":0}}}] APP 2025-04-04T18:53:32.670Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":107,"character":0},"end":{"line":108,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:53:32.670Z --> 
copilot | completion request APP 2025-04-04T18:53:32.671Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:53:32.993Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 APP 2025-04-04T18:53:32.994Z --> completion hints: | I changed the configuration to the following: APP 2025-04-04T18:53:32.994Z --> sent request | {"jsonrpc":"2.0","id":10,"result":{"isIncomplete":false,"items":[{"label":"I changed the configuration to the following:","kind":1,"preselect":true,"detail":"I changed the configuration to the following:","insertText":"I changed the configuration to the following:","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":107,"character":268},"end":{"line":107,"character":317}}}]}]}} APP 2025-04-04T18:53:32.995Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:53:34.047Z --> received didChange | language: markdown | contentVersion: 398 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:53:34.243Z --> failed to parse line: | failed to parse | Content-Length: 13543 APP 2025-04-04T18:53:34.244Z --> received didChange | language: markdown | contentVersion: 399 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:53:34.404Z --> received didChange | language: markdown | contentVersion: 400 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:53:34.410Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":226,"line":107},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":11} APP 2025-04-04T18:53:34.613Z --> calling completion event APP 2025-04-04T18:53:34.613Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":107,"character":0},"end":{"line":108,"character":0}}}] APP 2025-04-04T18:53:34.613Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":107,"character":0},"end":{"line":108,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:53:34.613Z --> 
copilot | completion request APP 2025-04-04T18:53:34.613Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:53:34.938Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 APP 2025-04-04T18:53:34.939Z --> completion hints: | change the configuration to the following: APP 2025-04-04T18:53:34.939Z --> sent request | {"jsonrpc":"2.0","id":11,"result":{"isIncomplete":false,"items":[{"label":"change the configuration to the following:","kind":1,"preselect":true,"detail":"change the configuration to the following:","insertText":"change the configuration to the following:","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":107,"character":268},"end":{"line":107,"character":317}}}]}]}} APP 2025-04-04T18:53:34.939Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:53:39.896Z --> received didChange | language: markdown | contentVersion: 401 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:53:40.070Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":48,"line":127},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":12} APP 2025-04-04T18:53:40.273Z --> calling completion event APP 2025-04-04T18:53:40.273Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":127,"character":0},"end":{"line":128,"character":0}}}] APP 2025-04-04T18:53:40.273Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":127,"character":0},"end":{"line":128,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:53:40.273Z --> 
copilot | completion request APP 2025-04-04T18:53:40.273Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:53:40.387Z --> received didChange | language: markdown | contentVersion: 402 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:53:40.536Z --> received didChange | language: markdown | contentVersion: 403 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:53:40.776Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 APP 2025-04-04T18:53:40.776Z --> completion hints: | we run the following command: APP 2025-04-04T18:53:40.777Z --> sent request | {"jsonrpc":"2.0","id":12,"result":{"isIncomplete":false,"items":[{"label":"we run the following command:","kind":1,"preselect":true,"detail":"we run the following command:","insertText":"we run the following command:","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":127,"character":77},"end":{"line":127,"character":82}}}]}]}} APP 2025-04-04T18:53:40.777Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:53:42.270Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:53:42.671Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:53:42.945Z --> received didChange | language: markdown | contentVersion: 404 | uri: 
# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4

This is the fourth blog post of the f3s series about self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is a lightweight, modern hypervisor for FreeBSD. Its minimal overhead allows it to achieve near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that provide hardware virtualization extensions (such as Intel VT-x or AMD-V).
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. POPCNT support matters for virtualization because guest operating systems use this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass the capability through to the virtual machines for better performance. Without it, some applications might not run at all or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,...,POPCNT,...>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and reduces a lot of overhead.
We also install the required package to make Bhyve work with the UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

TODO: Why this Distro?

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default configuration looks like this now:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ on each host.

But to make Rocky Linux boot, the configuration needs changes (plus some other adjustments, e.g.
as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM). So we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga="io"
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Bhyve now also listens on port 5900 for VNC connections, so we connected to it with a VNC client and ran through the installation dialogues. This could be automated or done unattended, but with only 3 VMs to install, and installations happening at most once a year, the automation doesn't seem worth it.

### Increasing the disk image size

By default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs.
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

I mostly selected the default settings (automatic partitioning on the 100GB drive) and set a root user password. After the installation, the VMs were rebooted.

## After install

I performed the following steps on all 3 VMs. The examples below are all executed on `f0` (and on the VM `r0` running on it):

### VM auto-start after host reboot

To automatically start the VM when the host boots, I added it to `vm_list` in `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % doas sysrc vm_list=rocky
vm_list:  -> rocky
```

To reach the VMs by name, I also appended their addresses to `/etc/hosts`:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a `PermitRootLogin yes` line to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited `/etc/ssh/sshd_config` again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU-efficient.
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4022 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4347 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4345 ns/op
```

The Linux benchmark is slightly slower than the
FreeBSD one, and the Go version is also a bit older. I tried the same with an up-to-date Go (1.24.x) and got similar results. There could be a slight Bhyve overhead, or FreeBSD may just be slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

But as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1  1000000000  0.4273 ns/op
BenchmarkCPUSilly2  1000000000  0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux, but it doesn't quite reach the bare-metal performance of the FreeBSD host.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":51,"line":143},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":14} APP 2025-04-04T18:53:43.838Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. 
If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 
MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so weconnected it with a VNC client and ran through the installation dialogues. 
This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. 
In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux, but it doesn't reach reach \n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":406} APP 2025-04-04T18:53:43.839Z --> calling completion event APP 2025-04-04T18:53:43.839Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":143,"character":0},"end":{"line":144,"character":0}}}] APP 2025-04-04T18:53:43.839Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":143,"character":0},"end":{"line":144,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:53:43.839Z --> 
# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4

This is the fourth blog post of the f3s series for self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is a lightweight, modern hypervisor for FreeBSD. Its main strength is minimal overhead, which allows it to achieve near-native performance for virtual machines while leveraging the capabilities of the FreeBSD operating system for resource and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction matters because guest operating systems use it to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to the virtual machines for better performance. Without it, some applications might not run at all or might perform sub-optimally in the virtualized environment.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf
```

So it's there! All good.
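For the curious, the flag can also be read straight out of the hex value. A small sketch (it assumes, per the Intel SDM, that `Features2` is the raw ECX register of CPUID leaf 1 and that POPCNT is bit 23; neither fact is stated in the dmesg output itself):

```sh
# Features2 is the raw ECX register of CPUID leaf 1; POPCNT is bit 23
# (per the Intel SDM -- an assumption, not shown in the dmesg output).
features2=0x7ffafbbf
popcnt=$(( (features2 >> 23) & 1 ))
echo "$popcnt"   # prints 1: POPCNT is supported
```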
## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of manual work. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

TODO: Why this Distro?

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```
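Before booting the installer from the ISO, it may be worth checking its integrity. A sketch of such a step (my addition, not part of the original workflow; the expected hash would come from the distribution's published CHECKSUM file, and on FreeBSD the hashing tool is `sha256 -q` rather than coreutils' `sha256sum`):

```sh
# Hypothetical ISO verification helper (not from the original post):
# compares a file's SHA-256 against an expected value.
verify_iso() {
    file=$1
    expected=$2
    actual=$(sha256sum "$file" | awk '{print $1}')  # FreeBSD: sha256 -q "$file"
    [ "$actual" = "$expected" ]
}

# Usage (the hash here is a placeholder, not the real Rocky 9.5 checksum):
# verify_iso /zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso "<sha256-from-CHECKSUM-file>"
```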
### VM configuration

The default configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ on each host.

To make Rocky Linux boot, a few adjustments are needed. And as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we also give them beefy specs: 4 CPU cores and 14GB of RAM. So we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so we connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but with only 3 VMs to install, the automation doesn't seem worth it, as we do this only once a year or even less often.
### Increasing the disk image size

By default, the VM disk image is only 20G, which is a bit small for my purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```
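Why `truncate` is enough here: it only raises the file's apparent size, so the enlarged image stays sparse and costs no disk space until the guest writes to it. A quick throwaway demonstration:

```sh
# truncate grows the apparent size without allocating blocks, so the
# enlarged disk image stays sparse until the guest writes to it.
f=$(mktemp)
truncate -s 100M "$f"
wc -c < "$f"      # apparent size: 104857600 bytes
du -k "$f"        # allocated size: (close to) 0 KB
rm -f "$f"
```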
### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

## After install

I performed the following steps for all 3 VMs. The examples are all executed on `f0` (with the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM after a host reboot, I added it to the `vm_list` variable in the `rc.conf` of the FreeBSD hosts (e.g. via `doas sysrc vm_list="rocky"`).

I also added the VMs' static IP addresses to `/etc/hosts` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```
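The per-VM steps can also be driven from the laptop in one loop. A sketch with a hypothetical helper (my addition; it assumes the `r*.lan.buetow.org` names resolve and the root SSH keys set up above are in place):

```sh
# Hypothetical helper (not from the original post): run one command
# on all three VMs over SSH.
run_on_all() {
    cmd=$1
    for i in 0 1 2; do
        ssh "root@r$i.lan.buetow.org" "$cmd"
    done
}

# Example: apply the latest updates everywhere.
# run_on_all 'dnf -y update'
```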
## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```
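To put a number on the gap, the relative slowdown of the Linux VM against the FreeBSD host can be computed from the two `BenchmarkCPUSilly1` figures above:

```sh
# Relative slowdown of the Linux VM (0.4347 ns/op) vs. the FreeBSD
# host (0.4022 ns/op), in percent.
awk 'BEGIN { host = 0.4022; vm = 0.4347; printf "%.1f%%\n", (vm - host) / host * 100 }'
# prints 8.1%
```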
The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux, but it doesn't quite reach the numbers of the FreeBSD host system.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux, but it doesn't reach reach \n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":412} APP 2025-04-04T18:53:55.996Z --> skipping because content is stale APP 2025-04-04T18:53:55.996Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:53:55.996Z --> sent request | {"jsonrpc":"2.0","id":18,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:53:56.169Z --> received didChange | language: markdown | contentVersion: 413 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:53:56.420Z --> 
received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":24,"line":157},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":19} APP 2025-04-04T18:53:56.511Z --> received didChange | language: markdown | contentVersion: 414 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:53:56.517Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":25,"line":157},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":20} APP 2025-04-04T18:53:56.718Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. 
as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so we connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so we stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, we opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux, but it doesn't reach reach \n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":414} APP 2025-04-04T18:53:56.718Z --> calling completion event APP 2025-04-04T18:53:56.718Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":157,"character":0},"end":{"line":158,"character":0}}}] APP 2025-04-04T18:53:56.718Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":157,"character":0},"end":{"line":158,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:53:56.718Z --> 
copilot | completion request APP 2025-04-04T18:53:56.719Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:53:57.343Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 APP 2025-04-04T18:53:57.345Z --> completion hints: | used a VNC client to connect to the VM. I used `Remmina` on my Linux laptop, but there are many other options available. The IP address of the VM is ` APP 2025-04-04T18:53:57.345Z --> sent request | {"jsonrpc":"2.0","id":20,"result":{"isIncomplete":false,"items":[{"label":"used a VNC client to connect to the VM. I used `Remmina` on my Linux laptop, but there are many other options available. The IP address of the VM is `","kind":1,"preselect":true,"detail":"used a VNC client to connect to the VM. I used `Remmina` on my Linux laptop, but there are many other options available. The IP address of the VM is `","insertText":"used a VNC client to connect to the VM. I used `Remmina` on my Linux laptop, but there are many other options available. 
The IP address of the VM is `","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":157,"character":175},"end":{"line":157,"character":541}}}]}]}} APP 2025-04-04T18:53:57.345Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:53:58.658Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:54:06.720Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:54:11.539Z --> received didChange | language: markdown | contentVersion: 415 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:12.101Z --> received didChange | language: markdown | contentVersion: 416 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:12.325Z --> received didChange | language: markdown | contentVersion: 417 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:12.577Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":2,"line":159},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":21} APP 2025-04-04T18:54:12.780Z --> running completion on buffer | 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat 
rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so we connected it with a VNC client and ran through the installation dialogues. 
This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so we stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, we opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nWe primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. 
In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
 git clone https://codeberg.org/snonux/sillybench && \
 cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
 git clone https://codeberg.org/snonux/sillybench && \
 cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the
FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux, but it doesn't quite reach the numbers of the bare-metal FreeBSD host.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
didChange | language: markdown | contentVersion: 439 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:28.085Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:54:29.451Z --> received didChange | language: markdown | contentVersion: 440 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:50.498Z --> received didChange | language: markdown | contentVersion: 441 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:51.268Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":28,"line":351},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":24} APP 2025-04-04T18:54:51.300Z --> received didChange | language: markdown | contentVersion: 442 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:51.470Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. 
as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux!\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":442} APP 2025-04-04T18:54:51.470Z --> skipping because content is stale APP 2025-04-04T18:54:51.470Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:54:51.470Z --> sent request | {"jsonrpc":"2.0","id":24,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:54:51.705Z --> received didChange | language: markdown | contentVersion: 443 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:51.712Z --> received request: | 
{"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":30,"line":351},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":25} APP 2025-04-04T18:54:51.887Z --> received didChange | language: markdown | contentVersion: 444 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:51.913Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. 
as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! 
A\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":444} APP 2025-04-04T18:54:51.914Z --> skipping because content is stale APP 2025-04-04T18:54:51.914Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:54:51.914Z --> sent request | {"jsonrpc":"2.0","id":25,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:54:52.091Z --> received didChange | language: markdown | contentVersion: 445 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:52.171Z --> received didChange | language: markdown | contentVersion: 446 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:52.278Z --> received didChange | language: markdown | contentVersion: 447 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:52.284Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":34,"line":351},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":26} APP 2025-04-04T18:54:52.472Z --> received didChange | language: markdown | contentVersion: 448 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:52.484Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes 
with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD base system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat 
rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nTo make Rocky Linux boot, this configuration needs some adjustments. Furthermore, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14 GB of RAM. So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. 
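For reference, the Rocky Linux (Anaconda) installer can also be driven unattended with a kickstart file passed via the `inst.ks=` boot option. The following is only a hypothetical minimal sketch; the file name, root password, and package environment are placeholders and not part of this setup:

```
# ks.cfg - hypothetical minimal kickstart for an unattended install
lang en_US.UTF-8
keyboard us
timezone UTC
rootpw --plaintext changeme
clearpart --all --initlabel
autopart
reboot
%packages
@^minimal-environment
%end
```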
This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it, as we do this only once a year or even less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. 
In the following, the examples are all executed on `f0` (with the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM after a host reboot, I added it to the `vm_list` in the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % doas sysrc vm_list=\"rocky\"\nvm_list: -> rocky\n```\n\nI also added the VM names to `/etc/hosts` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older; I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nAs I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12 GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! 
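A note of caution on numbers this small: at roughly 0.4 ns/op, each iteration costs about one loop step, so the compiler may have optimized the multiplication itself away. One common guard is writing each result to a package-level sink; this is a hypothetical variant, not part of sillybench, using `testing.Benchmark` so it runs as a plain program:

```go
package main

import (
	"fmt"
	"testing"
)

// sink is a package-level variable. Storing each result here keeps the
// compiler from eliminating the multiplication as dead code, which a
// blank assignment such as `_ = i * i` does not guarantee.
var sink int

func main() {
	// testing.Benchmark runs a single benchmark function outside of
	// "go test" and returns iteration count and ns/op.
	res := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			sink = i * i
		}
	})
	fmt.Println(res.String())
}
```

Whether the sink changes the reported ns/op would indicate how much of the original figure was measuring an empty loop.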
And I\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":448} APP 2025-04-04T18:54:52.484Z --> skipping because content is stale APP 2025-04-04T18:54:52.484Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:54:52.484Z --> sent request | {"jsonrpc":"2.0","id":26,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:54:52.559Z --> received didChange | language: markdown | contentVersion: 449 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:52.565Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":36,"line":351},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":27} APP 2025-04-04T18:54:52.685Z --> received didChange | language: markdown | contentVersion: 450 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:52.767Z --> skipping because content is stale APP 2025-04-04T18:54:52.767Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:54:52.767Z --> sent request | {"jsonrpc":"2.0","id":27,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:54:52.780Z --> received didChange | language: markdown | contentVersion: 451 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:52.827Z --> received didChange | language: markdown | contentVersion: 452 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:52.833Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":39,"line":351},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":28} APP 2025-04-04T18:54:52.947Z --> received didChange | language: markdown | contentVersion: 453 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:53.030Z --> received didChange | language: markdown | contentVersion: 454 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:53.034Z --> skipping because content is stale APP 2025-04-04T18:54:53.034Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:54:53.034Z --> sent request | {"jsonrpc":"2.0","id":28,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:54:53.133Z --> received didChange | language: markdown | contentVersion: 455 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:53.190Z --> received didChange | language: markdown | contentVersion: 456 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:53.309Z --> received didChange | language: markdown | contentVersion: 457 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:53.349Z --> received didChange | language: markdown | contentVersion: 458 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:53.354Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":45,"line":351},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":29} APP 2025-04-04T18:54:53.550Z --> received didChange | language: markdown | contentVersion: 459 | uri: 
# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4

This is the fourth blog post in the f3s series about self-hosting in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is FreeBSD's lightweight, modern hypervisor. Its main strength is its minimal overhead, which allows it to achieve near-native performance for virtual machines while leveraging the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. POPCNT support matters for virtualization because guest operating systems use this instruction to speed up various tasks. If the host CPU supports POPCNT, Bhyve can pass this capability through to the virtual machines for better performance.
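For background: the `Features2` word that FreeBSD prints at boot is the raw CPUID leaf 1 ECX value, and POPCNT is bit 23 of that word. Given a `Features2` value such as `0x7ffafbbf`, the bit can be verified with plain shell arithmetic (a quick sanity check, not required for the setup):

```sh
# POPCNT is CPUID.1:ECX bit 23; Features2 is that ECX word.
printf 'POPCNT bit: %d\n' $(( (0x7ffafbbf >> 23) & 1 ))
# Prints: POPCNT bit: 1
```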
Without POPCNT support, some applications might not run at all or perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve    1.74M   453G   1.74M   /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs exist yet:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE   LOADER   CPU   MEMORY   VNC   AUTO   STATE
```

## Rocky Linux VMs

TODO: Why this Distro?

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
  https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default configuration looks like this now:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ on each host.

To make Rocky Linux boot, this needs some adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14G of RAM. So we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, I ran:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME    DATASTORE   LOADER   CPU   MEMORY   VNC            AUTO   STATE
rocky   default     uefi     4     14G      0.0.0.0:5900   No     Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root   bhyve   6079   8   tcp4   *:5900   *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues.
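As an aside: the dialogue steps could in principle be captured in an Anaconda kickstart file for fully unattended installs. A minimal hypothetical sketch (standard kickstart directives; the values are illustrative, not taken from my actual setup):

```
# Hypothetical minimal kickstart for an unattended Rocky Linux install.
text
cdrom
lang en_US.UTF-8
keyboard us
rootpw --plaintext changeme
clearpart --all --initlabel
autopart
reboot
```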
This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it, as we do this only once a year or even less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for my purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

I mostly selected the default settings (auto partitioning on the 100G drive and a root user password). After the installation, the VMs were rebooted.

## After install

I performed the following steps for all 3 VMs.
In the following, the examples are all executed on `f0` (and in the VM `r0`, which runs on `f0`):

### VM auto-start after host reboot

To automatically start the VM after a host reboot, I added it to `vm_list` in the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % doas sysrc vm_list="rocky"
vm_list:  -> rocky
```

I also added the VMs to `/etc/hosts` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % doas tee -a /etc/hosts <<END
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding the line `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited `/etc/ssh/sshd_config` again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok   codeberg.org/snonux/sillybench   0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12G of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok   codeberg.org/snonux/sillybench   0.949s
```

It's a bit better than Linux!
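To put numbers on the comparison, here is the relative slowdown of both VMs versus the bare-metal FreeBSD run, computed from the `BenchmarkCPUSilly1` figures above (0.4022, 0.4347, and 0.4273 ns/op):

```sh
# Relative slowdown of the VMs vs. the bare-metal FreeBSD host,
# using the BenchmarkCPUSilly1 ns/op values.
awk 'BEGIN {
    host = 0.4022; rocky_vm = 0.4347; freebsd_vm = 0.4273
    printf "Rocky VM overhead:   %.1f%%\n", (rocky_vm   - host) / host * 100
    printf "FreeBSD VM overhead: %.1f%%\n", (freebsd_vm - host) / host * 100
}'
# Rocky VM overhead:   8.1%
# FreeBSD VM overhead: 6.2%
```

So even in this silly benchmark, both guests stay within single-digit percent of bare metal.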
And I am sure, t

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:54.535Z --> received didChange | language: markdown | contentVersion: 465 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:54.544Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal 
iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. 
This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. 
In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! 
And I am sure, that th\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":465} APP 2025-04-04T18:54:54.544Z --> skipping because content is stale APP 2025-04-04T18:54:54.544Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:54:54.544Z --> sent request | {"jsonrpc":"2.0","id":30,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:54:54.584Z --> received didChange | language: markdown | contentVersion: 466 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:54.711Z --> received didChange | language: markdown | contentVersion: 467 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:54.786Z --> received didChange | language: markdown | contentVersion: 468 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:54.792Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":55,"line":351},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":31} APP 2025-04-04T18:54:54.865Z --> received didChange | language: markdown | contentVersion: 469 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:54.966Z --> received didChange | language: markdown | contentVersion: 470 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:54.994Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default 
configuration looks like this now:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ on each host.

To make Rocky Linux boot, we have to switch the loader to UEFI. We also make some other adjustments: as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs, like 4 CPU cores and 14GB of RAM. So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, I run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
 * found guest in /zroot/bhyve/rocky
 * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root bhyve 6079 8 tcp4 *:5900 *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues.
This could be done unattended or automated further, but there are only 3 VMs to install, and the automation doesn't seem worth it, as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for my purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

## After install

I performed the following steps for all 3 VMs.
In the following, the examples are all executed on `f0` (for the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, I added them to the `vm_list` variable in the `rc.conf` on the FreeBSD hosts (e.g. via `doas sysrc vm_list="rocky"`).

I also added entries for the three VMs to the `/etc/hosts` file:

```sh
paul@f0:/bhyve/rocky % cat << END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient.
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the
FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

But as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux!
Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
","triggerKind":2},"position":{"character":69,"line":351},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":35} APP 2025-04-04T18:54:56.549Z --> received didChange | language: markdown | contentVersion: 483 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:56.669Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU and Bhyve support for the POPCNT instruction is important because guest operating systems utilize it to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to the virtual machines for better performance. Without POPCNT support, some applications might not run at all or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<...POPCNT...>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

TODO: Why this Distro?

### ISO download

We're going to install Rocky Linux from the latest minimal 
ISO:

```sh
paul@f0:~ % doas vm iso \
  https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ on each host.

To make Rocky Linux boot, we have to make some adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14G of RAM. So we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, I ran:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues.
This could be done unattended or with more automation, but there are only 3 VMs to install, and as we reinstall them once a year at most, the automation doesn't seem worth it.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for my purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

I mostly selected the default settings (auto-partitioning on the 100GB drive) and set a root user password. After the installation, the VMs were rebooted.

## After install

I performed the following steps for all 3 VMs.
In the following, the examples are all executed on `f0` (for the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM after a host reboot, I added it to the `vm_list` in `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % doas sysrc vm_list="rocky"
vm_list:  -> rocky
```

### Host name resolution

I also added the VM names to `/etc/hosts` on the hosts:

```sh
paul@f0:/bhyve/rocky % cat << END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing CPU

The aim is to prove that Bhyve VMs are CPU efficient.
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older; I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux!
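To put the measured ns/op numbers in perspective, here is a quick back-of-the-envelope sketch (a hypothetical helper, not part of sillybench) that computes the relative slowdown of each VM against the bare-metal FreeBSD run, using the `BenchmarkCPUSilly1` values from above:

```go
package main

import "fmt"

// overheadPercent returns how much slower vm is than host, in percent.
func overheadPercent(host, vm float64) float64 {
	return (vm - host) / host * 100
}

func main() {
	host := 0.4022 // FreeBSD bare metal, ns/op
	fmt.Printf("Linux VM:   %.1f%%\n", overheadPercent(host, 0.4347))
	fmt.Printf("FreeBSD VM: %.1f%%\n", overheadPercent(host, 0.4273))
}
```

That's roughly 8% for the Linux VM and 6% for the FreeBSD VM, so both stay within single-digit percent of bare metal in this silly benchmark.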
And I am sure that this is not really a realistic benchmark, but it shows that Bhyve is not a performance killer.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. 
as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! 
And I am sure, that this is not really a \n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":484} APP 2025-04-04T18:54:56.960Z --> calling completion event APP 2025-04-04T18:54:56.960Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":351,"character":0},"end":{"line":352,"character":0}}}] APP 2025-04-04T18:54:56.960Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":351,"character":0},"end":{"line":352,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:54:56.960Z --> copilot | completion request APP 2025-04-04T18:54:56.960Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:54:57.544Z --> received didChange | language: markdown | contentVersion: 485 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:57.929Z --> received didChange | language: markdown | contentVersion: 486 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:54:57.986Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 APP 2025-04-04T18:54:57.988Z --> completion hints: | realistic benchmark, but it shows that Bhyve is not a performance killer. I am sure that the performance of the VMs is very close to the performance of the host system. APP 2025-04-04T18:54:57.988Z --> sent request | {"jsonrpc":"2.0","id":36,"result":{"isIncomplete":false,"items":[{"label":"realistic benchmark, but it shows that Bhyve is not a performance killer. 
as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! 
And I am sure, that this is not really a scientific \n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":495} APP 2025-04-04T18:55:00.226Z --> calling completion event APP 2025-04-04T18:55:00.226Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":351,"character":0},"end":{"line":352,"character":0}}}] APP 2025-04-04T18:55:00.226Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":351,"character":0},"end":{"line":352,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:55:00.226Z --> copilot | completion request APP 2025-04-04T18:55:00.227Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:55:00.297Z --> received didChange | language: markdown | contentVersion: 496 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:00.407Z --> received didChange | language: markdown | contentVersion: 497 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:00.528Z --> received didChange | language: markdown | contentVersion: 498 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:00.635Z --> received didChange | language: markdown | contentVersion: 499 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:00.661Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 APP 
2025-04-04T18:55:00.662Z --> completion hints: | benchmark, but it shows that Bhyve is a very efficient hypervisor. I am sure that the performance of the VMs will be good enough for my purposes. APP 2025-04-04T18:55:00.663Z --> sent request | {"jsonrpc":"2.0","id":37,"result":{"isIncomplete":false,"items":[{"label":"benchmark, but it shows that Bhyve is a very efficient hypervisor. I am sure that the performance of the VMs will be good enough for my purposes.","kind":1,"preselect":true,"detail":"benchmark, but it shows that Bhyve is a very efficient hypervisor. I am sure that the performance of the VMs will be good enough for my purposes.","insertText":"benchmark, but it shows that Bhyve is a very efficient hypervisor. I am sure that the performance of the VMs will be good enough for my purposes.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":351,"character":227},"end":{"line":351,"character":227}}}]}]}} APP 2025-04-04T18:55:00.663Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:55:00.730Z --> received didChange | language: markdown | contentVersion: 500 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:00.977Z --> received didChange | language: markdown | contentVersion: 501 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:01.027Z --> received didChange | language: markdown | contentVersion: 502 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:01.182Z --> received didChange | language: markdown | contentVersion: 503 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:01.224Z --> received didChange | language: markdown | contentVersion: 504 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:01.481Z --> received didChange | language: markdown | contentVersion: 505 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:01.544Z --> received didChange | language: markdown | contentVersion: 506 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:01.549Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":93,"line":351},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":38} APP 2025-04-04T18:55:01.751Z --> calling completion event APP 2025-04-04T18:55:01.751Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":351,"character":0},"end":{"line":352,"character":0}}}] APP 2025-04-04T18:55:01.751Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":351,"character":0},"end":{"line":352,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:55:01.751Z --> copilot | completion request APP 2025-04-04T18:55:01.752Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:55:02.185Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 APP 2025-04-04T18:55:02.186Z --> completion hints: | But it shows that Bhyve is a very efficient hypervisor and that FreeBSD is a very efficient operating system.
APP 2025-04-04T18:55:02.186Z --> sent request | {"jsonrpc":"2.0","id":38,"result":{"isIncomplete":false,"items":[{"label":"But it shows that Bhyve is a very efficient hypervisor and that FreeBSD is a very efficient operating system.","kind":1,"preselect":true,"detail":"But it shows that Bhyve is a very efficient hypervisor and that FreeBSD is a very efficient operating system.","insertText":"But it shows that Bhyve is a very efficient hypervisor and that FreeBSD is a very efficient operating system.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":351,"character":202},"end":{"line":351,"character":202}}}]}]}} APP 2025-04-04T18:55:02.186Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:55:04.506Z --> received didChange | language: markdown | contentVersion: 507 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:04.603Z --> received didChange | language: markdown | contentVersion: 508 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:04.623Z --> received didChange | language: markdown | contentVersion: 509 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:04.630Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":96,"line":351},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":39} APP 2025-04-04T18:55:04.831Z --> running completion on buffer | 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! 
All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management considerably. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable: -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir: -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve 1.74M 453G 1.74M /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
```

## Rocky Linux VMs

TODO: Why this Distro?

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
  https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ on each host.

To make Rocky Linux boot, the loader has to be switched to UEFI. We also make some other adjustments: as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs of 4 CPU cores and 14G of RAM. So we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, I run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
 * found guest in /zroot/bhyve/rocky
 * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root bhyve 6079 8 tcp4 *:5900 *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. 
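A side note, and an assumption on my part rather than part of the setup above: the bhyve VNC endpoint is neither encrypted nor authenticated, so on an untrusted network the connection could first be tunnelled through SSH and the VNC client then pointed at the local end of the tunnel:

```sh
# Forward local port 5900 to the VNC server on f0, then
# connect the VNC client to vnc://localhost:5900
ssh -L 5900:localhost:5900 paul@f0.lan.buetow.org
```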
This could be automated and done unattended, but with only three VMs to install (and a reinstall happening once a year at most), the automation doesn't seem worth it.

### Increasing the disk image size

By default, the VM disk image is only 20G, which is a bit small for my purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, this could have been automated a bit more, but with just three VMs, it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

I mostly selected the default settings (auto-partitioning on the 100G drive and a root user password). After the installation, the VMs were rebooted.

## After install

I performed the following steps for all 3 VMs. 
The examples below are all executed on `f0` (or inside the VM `r0`, which runs on `f0`):

### VM auto-start after host reboot

To start the VM automatically when a server boots, I added it to the `vm_list` variable in `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % doas sysrc vm_list="rocky"
vm_list: -> rocky
```

I also added entries for the VMs to `/etc/hosts`:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Here:

* `192.168.1.120` through `192.168.1.122` are the static IPs of the VMs themselves (e.g. `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited `/etc/ssh/sshd_config` again on all 3 VMs and set `PasswordAuthentication no`, so that from now on only SSH key authentication is allowed.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU-efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // keep the result alive to discourage compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
  git clone https://codeberg.org/snonux/sillybench && \
  cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
  git clone https://codeberg.org/snonux/sillybench && \
  cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date Go (1.24.x) and got similar results. There could be a slight Bhyve overhead, or FreeBSD may simply be a bit more efficient in this benchmark. Overall, Bhyve performs excellently.

But as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I won't go through the details of installing a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12G of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! To be clear, this is not really a scientific benchmark. 
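For a rough sense of scale, the relative overhead can be computed from the `BenchmarkCPUSilly1` ns/op values reported above (a quick sketch; the numbers are only from these single runs):

```sh
# ns/op from the runs above: FreeBSD host, Linux VM, FreeBSD VM
awk 'BEGIN {
    host = 0.4022; linux_vm = 0.4347; freebsd_vm = 0.4273
    printf "Linux VM overhead:   %.1f%%\n", (linux_vm - host) / host * 100
    printf "FreeBSD VM overhead: %.1f%%\n", (freebsd_vm - host) / host * 100
}'
```

So both VMs land within single-digit percent of the bare-metal result.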
So take it with a grain of salt.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat 
rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. 
This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. 
In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! And I am sure, that this is not really a scientific benchmark. 
So take t\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":515} APP 2025-04-04T18:55:05.818Z --> skipping because content is stale APP 2025-04-04T18:55:05.818Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:55:05.818Z --> sent request | {"jsonrpc":"2.0","id":40,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:55:05.825Z --> received didChange | language: markdown | contentVersion: 516 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:05.889Z --> received didChange | language: markdown | contentVersion: 517 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:06.010Z --> received didChange | language: markdown | contentVersion: 518 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:06.015Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":105,"line":351},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":41} APP 2025-04-04T18:55:06.078Z --> received didChange | language: markdown | contentVersion: 519 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:06.150Z --> received didChange | language: markdown | contentVersion: 520 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:06.216Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default 
configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. 
This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. 
In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date Go version (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! To be clear, this is not really a scientific benchmark. 
So take the results with a grain of salt.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":520} APP 2025-04-04T18:55:06.217Z --> skipping because content is stale APP 2025-04-04T18:55:06.217Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:55:06.217Z --> sent request | {"jsonrpc":"2.0","id":41,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:55:06.294Z --> received didChange | language: markdown | contentVersion: 521 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:06.376Z --> received didChange | language: markdown | contentVersion: 522 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:06.582Z --> received didChange | language: markdown | contentVersion: 523 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:06.617Z --> received didChange | language: markdown | contentVersion: 524 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:06.811Z --> received didChange | language: markdown | contentVersion: 525 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:06.888Z --> received didChange | language: markdown | contentVersion: 526 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:06.894Z --> received request: | 
{"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":113,"line":351},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":42} APP 2025-04-04T18:55:06.961Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:55:07.023Z --> received didChange | language: markdown | contentVersion: 527 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:07.095Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","languageId":"markdown","version":527} APP 2025-04-04T18:55:07.095Z --> skipping because content is stale APP 2025-04-04T18:55:07.095Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:55:07.095Z --> sent request | {"jsonrpc":"2.0","id":42,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:55:07.118Z --> received didChange | language: markdown | contentVersion: 528 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:07.235Z --> received didChange | language: markdown | contentVersion: 529 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:07.322Z --> received didChange | language: markdown | contentVersion: 530 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:07.385Z --> received didChange | language: markdown | contentVersion: 531 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:07.390Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":118,"line":351},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":43} APP 2025-04-04T18:55:07.593Z --> running completion on buffer | 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","languageId":"markdown","version":531} APP 2025-04-04T18:55:07.593Z --> calling completion event APP 2025-04-04T18:55:07.593Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":351,"character":0},"end":{"line":352,"character":0}}}] APP 2025-04-04T18:55:07.593Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":351,"character":0},"end":{"line":352,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:55:07.593Z --> copilot | completion request APP 2025-04-04T18:55:07.594Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:55:07.848Z --> received didChange | language: markdown | contentVersion: 532 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:07.963Z --> received didChange | language: markdown | contentVersion: 533 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:07.969Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":120,"line":351},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":44} APP 2025-04-04T18:55:08.021Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 APP 2025-04-04T18:55:08.022Z --> completion hints: | a grain of salt. 
But it shows that Bhyve is a very efficient hypervisor, and the FreeBSD kernel is very efficient as well. APP 2025-04-04T18:55:08.022Z --> sent request | {"jsonrpc":"2.0","id":43,"result":{"isIncomplete":false,"items":[{"label":"a grain of salt. But it shows that Bhyve is a very efficient hypervisor, and the FreeBSD kernel is very efficient as well.","kind":1,"preselect":true,"detail":"a grain of salt. But it shows that Bhyve is a very efficient hypervisor, and the FreeBSD kernel is very efficient as well.","insertText":"a grain of salt. But it shows that Bhyve is a very efficient hypervisor, and the FreeBSD kernel is very efficient as well.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":351,"character":240},"end":{"line":351,"character":240}}}]}]}} APP 2025-04-04T18:55:08.022Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:55:08.126Z --> received didChange | language: markdown | contentVersion: 534 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:08.170Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nBhyve is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It leverages the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction is important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run, or might perform sub-optimally, in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot, we need some adjustments. For example, 
as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give the VMs beefy specs: 4 CPU cores and 14GB of RAM. So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! And I am sure that this is not really a scientific benchmark. 
So take the results with a b\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":534} APP 2025-04-04T18:55:08.170Z --> skipping because content is stale APP 2025-04-04T18:55:08.171Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:55:08.171Z --> sent request | {"jsonrpc":"2.0","id":44,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:55:08.287Z --> received didChange | language: markdown | contentVersion: 535 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:08.355Z --> received didChange | language: markdown | contentVersion: 536 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:08.456Z --> received didChange | language: markdown | contentVersion: 537 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:08.461Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":124,"line":351},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":45} APP 2025-04-04T18:55:08.592Z --> received didChange | language: markdown | contentVersion: 538 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:08.659Z --> received didChange | language: markdown | contentVersion: 539 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:08.662Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"…","languageId":"markdown","version":539} APP 2025-04-04T18:55:08.662Z --> skipping because content is stale APP 2025-04-04T18:55:08.662Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:55:08.662Z --> sent request | {"jsonrpc":"2.0","id":45,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:55:08.752Z --> received didChange | language: markdown | contentVersion: 540 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:08.759Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":127,"line":351},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":46} APP 2025-04-04T18:55:08.881Z --> received didChange | language: markdown | contentVersion: 541 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:08.961Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nBhyve is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It leverages the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction is important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run, or might perform sub-optimally, in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot, we need some adjustments. For example, 
as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give the VMs beefy specs: 4 CPU cores and 14GB of RAM. So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
// Benchmarks must live in a *_test.go file so that `go test` picks them up.
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD may just be slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I won't go through the details of installing a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! Granted, this is not really a scientific benchmark.
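For a rough sense of scale, the single-run ns/op numbers above can be turned into relative overheads. This is just back-of-the-envelope arithmetic on one run each, not a statistically meaningful comparison:

```go
package main

import "fmt"

// overheadPct returns how much slower vm is than host, in percent.
func overheadPct(host, vm float64) float64 {
	return (vm/host - 1) * 100
}

func main() {
	host := 0.4022    // FreeBSD host, BenchmarkCPUSilly1 ns/op
	linuxVM := 0.4347 // Rocky Linux Bhyve VM
	bsdVM := 0.4273   // FreeBSD Bhyve VM

	fmt.Printf("Rocky Linux VM overhead: %.1f%%\n", overheadPct(host, linuxVM))
	fmt.Printf("FreeBSD VM overhead:     %.1f%%\n", overheadPct(host, bsdVM))
}
```

So both VMs land within roughly 6-8% of the host in this one micro-benchmark, with the FreeBSD guest a touch closer.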
So take the results with a grain of salt!

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat 
rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. 
This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. 
In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! And I am sure, that this is not really a scientific benchmark. 
So take the results with a bit of salt!\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":545} APP 2025-04-04T18:55:09.692Z --> skipping because content is stale APP 2025-04-04T18:55:09.692Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:55:09.692Z --> sent request | {"jsonrpc":"2.0","id":47,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:55:10.226Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:55:11.753Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:55:14.834Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:55:17.595Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:55:23.096Z --> received didChange | language: markdown | contentVersion: 546 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:29.261Z --> received didChange | language: markdown | contentVersion: 547 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:29.434Z --> received didChange | language: markdown | contentVersion: 548 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:31.812Z --> received didChange | language: markdown | contentVersion: 549 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:38.383Z --> received didChange | language: markdown | contentVersion: 550 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:38.821Z --> received didChange | language: markdown | contentVersion: 551 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:39.417Z --> received didChange | language: markdown | contentVersion: 552 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:39.423Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":1,"line":352},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":48} APP 2025-04-04T18:55:39.569Z --> received didChange | language: markdown | contentVersion: 553 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:39.574Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":2,"line":352},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":49} APP 
2025-04-04T18:55:39.776Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default 
configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. 
This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. 
In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! 
I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n##\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":553} APP 2025-04-04T18:55:39.776Z --> calling completion event APP 2025-04-04T18:55:39.776Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":352,"character":0},"end":{"line":353,"character":0}}}] APP 2025-04-04T18:55:39.777Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":352,"character":0},"end":{"line":353,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:55:39.777Z --> copilot | completion request APP 2025-04-04T18:55:39.777Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:55:39.820Z --> received didChange | language: markdown | contentVersion: 554 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:39.825Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":3,"line":352},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":50} APP 2025-04-04T18:55:39.981Z --> received didChange | language: markdown | contentVersion: 555 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:40.027Z --> running completion on buffer | 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post of the f3s series about self-hosting in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It leverages the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction is important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat 
rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot, the configuration needs a few adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. 
This could be done unattended or automated further, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\nI mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. 
In the following, the examples are all executed on `f0` (with the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % doas sysrc vm_list=\"rocky\"\nvm_list: -> rocky\n```\n\n### Adding the VMs to `/etc/hosts`\n\nTo reach the VMs by name, I added the following entries to `/etc/hosts`:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I also tried the up-to-date version of Go (1.24.x), with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! 
I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n## C\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":555} APP 2025-04-04T18:55:40.027Z --> skipping because content is stale APP 2025-04-04T18:55:40.028Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:55:40.028Z --> sent request | {"jsonrpc":"2.0","id":50,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:55:40.033Z --> received didChange | language: markdown | contentVersion: 556 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:40.075Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 APP 2025-04-04T18:55:40.075Z --> completion hints: | Conclusion APP 2025-04-04T18:55:40.076Z --> sent request | {"jsonrpc":"2.0","id":49,"result":{"isIncomplete":false,"items":[{"label":"Conclusion","kind":1,"preselect":true,"detail":"Conclusion","insertText":"Conclusion","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":352,"character":12},"end":{"line":352,"character":12}}}]}]}} APP 2025-04-04T18:55:40.076Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:55:40.284Z --> received request: | 
{"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":5,"line":352},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":51} APP 2025-04-04T18:55:40.435Z --> received didChange | language: markdown | contentVersion: 557 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:40.485Z --> skipping because content is stale APP 2025-04-04T18:55:40.485Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:55:40.485Z --> sent request | {"jsonrpc":"2.0","id":51,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:55:40.537Z --> received didChange | language: markdown | contentVersion: 558 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:40.639Z --> received didChange | language: markdown | contentVersion: 559 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:40.855Z --> received didChange | language: markdown | contentVersion: 560 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:40.922Z --> received didChange | language: markdown | contentVersion: 561 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:41.173Z --> received didChange | language: markdown | contentVersion: 562 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:41.176Z --> received request: | 
{"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":11,"line":352},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":52} APP 2025-04-04T18:55:41.258Z --> received didChange | language: markdown | contentVersion: 563 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:55:41.377Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post of the f3s series about self-hosting in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It leverages the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
Bhyve support for the POPCNT instruction is important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal 
ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot, the configuration needs a few adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. 
This could be done unattended or in a more automated fashion, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do this only once a year or even less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for my purposes. So I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

I mostly selected the default settings (auto-partitioning on the 100G drive and a root user password). After the installation, the VMs were rebooted.

## After install

I performed the following steps for all 3 VMs.
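Per-VM steps like the ones below can be driven from the laptop in a single loop. This is a sketch, assuming SSH access to the `r0`..`r2` hosts; the `echo` stands in for the real command so the snippet is safe to run anywhere:

```shell
# Apply the same maintenance command to every VM. The echo stands in
# for the real ssh invocation (commented out below).
hosts=""
for i in 0 1 2; do
  host="r$i.lan.buetow.org"
  hosts="$hosts $host"
  echo "would run on $host: dnf -y update"
  # ssh "root@$host" dnf -y update
done
```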
The examples below are all executed on `f0` (showing the VM `r0` running on `f0`):

### VM auto-start after host reboot

To start the VMs automatically after a host reboot, I added the following to `/etc/rc.conf` on the FreeBSD hosts (vm-bhyve starts every VM listed in `vm_list` at boot):

```sh
paul@f0:/bhyve/rocky % doas sysrc vm_list="rocky"
vm_list:  -> rocky
```

### Hostnames

The VMs also get names in my LAN, so I added them to `/etc/hosts` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % doas sh -c 'cat >>/etc/hosts <<END
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END'
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing CPU

The aim is to prove that Bhyve VMs are CPU efficient.
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

As I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12G of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux!
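One caveat worth noting: sub-nanosecond per-iteration figures usually mean the compiler eliminated at least part of the loop body as dead code, so these numbers partly compare how aggressively each Go build optimizes. A minimal sketch of the same silly arithmetic with the result actually consumed (a hypothetical standalone program, not part of sillybench):

```go
package main

import "fmt"

// silly repeats the arithmetic of BenchmarkCPUSilly2, but returns the
// result so the compiler cannot discard the loop as dead code. The
// divisor is always i+1 >= 1 here, so the guard from the benchmark
// version is not needed.
func silly(n int) float64 {
	var r float64
	for i := 0; i < n; i++ {
		r += float64(i)
		r *= float64(i)
		r /= float64(i) + 1
	}
	return r
}

func main() {
	// For n=4 the result is roughly 3.5.
	fmt.Println(silly(4) > 3) // prints "true"
}
```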
I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve gives you a reliable way to manage VMs without much hassle. With Linux VMs, you tap into all the cool stuff in the Linux world while keeping the steady reliability of FreeBSD. Plus, you're not stuck with just Linux: Bhyve lets you spin up FreeBSD VMs too, and maybe even a Windows one down the line. This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of your hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:57:34.654Z --> received didChange | language: markdown | contentVersion: 569 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:57:34.660Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":87,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":53} APP 2025-04-04T18:57:34.808Z --> received didChange | language: markdown | contentVersion: 570 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:57:34.861Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. 
as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting i. Bhyve gives you a reliable way to manage VMs without much hassle. With Linux VMs, you tap into all the cool stuff in the Linux world while keeping the steady reliability of FreeBSD. Plus, you’re not stuck with just Linux—Bhyve lets you spin up FreeBSD VMs too, and maybe even a Windows one down the line. This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of your hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":570} APP 2025-04-04T18:57:34.861Z --> skipping because content is stale APP 2025-04-04T18:57:34.861Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:57:34.861Z --> sent request | {"jsonrpc":"2.0","id":53,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:57:34.870Z --> received didChange | language: markdown | contentVersion: 571 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:57:34.911Z --> received didChange | language: markdown | contentVersion: 572 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:57:34.916Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":90,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":54} APP 2025-04-04T18:57:35.055Z --> received didChange | language: markdown | contentVersion: 573 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:57:35.117Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for 
self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. 
as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4

This is the fourth blog post in the f3s series for
self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Its minimal overhead allows it to achieve near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

`POPCNT` is a CPU instruction that counts the number of set bits (ones) in a binary number. Guest operating systems use it to perform various tasks more efficiently: if the host CPU supports `POPCNT`, Bhyve can pass the capability through to the virtual machines for better performance. Without it, some applications might not run at all or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and reduces a lot of overhead.
We also install the required package so that Bhyve can boot guests with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

TODO: Why this Distro?

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default configuration looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ on each host.

To make Rocky Linux boot (plus some other adjustments: e.g.
as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM), we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, I ran:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it, as we do this only about once a year or even less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs.
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

## After install

I performed the following steps for all 3 VMs. The examples below are all for the VM `r0` running on host `f0`:

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, I added them to the `vm_list` variable in the `rc.conf` of the FreeBSD hosts, which is the mechanism `vm-bhyve` uses for auto-starting VMs.

I also made the VM names resolvable by appending them to `/etc/hosts`:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient.
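As a side note on methodology: Go's `testing` package can also drive a benchmark from a normal program via `testing.Benchmark`, which is an easy way to sanity-check numbers without `go test`. The sketch below is an assumed illustration, not part of the sillybench repository; it also shows the package-level sink trick that keeps the compiler from optimizing the measured loop away:

```go
package main

import (
	"fmt"
	"testing"
)

// Package-level sink so the compiler cannot prove the loop result
// is unused and eliminate the whole loop body.
var sink float64

// sillyWork does some float math per iteration, with the result
// kept alive via the sink.
func sillyWork(b *testing.B) {
	var r float64
	for i := 0; i < b.N; i++ {
		r += float64(i)
		r /= float64(i) + 1
	}
	sink = r
}

func main() {
	// testing.Benchmark grows b.N until the measurement is stable,
	// just like `go test -bench=.` does for Benchmark* functions.
	res := testing.Benchmark(sillyWork)
	fmt.Println(res) // prints iterations and ns/op; numbers vary per machine
}
```

The same sink pattern is why the benchmark below keeps its `sillyResult` variable around instead of discarding it inside the loop.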
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4022 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4347 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4345 ns/op
```

The Linux benchmark is slightly slower than the
FreeBSD one, by roughly 8% (0.4347 vs. 0.4022 ns/op). The Go version is also a bit older, but I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient at this benchmark. Overall, this shows that Bhyve performs excellently.

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1  1000000000  0.4273 ns/op
BenchmarkCPUSilly2  1000000000  0.4286 ns/op
```

That's a bit better than Linux: around 6% slower than the bare-metal run instead of 8%. I am aware that this is not really a scientific benchmark, so take the results with a grain of salt!

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve gives you a reliable way to manage VMs without much hassle. With Linux VMs, you tap into all the cool stuff in the Linux world while keeping the steady reliability of FreeBSD. Plus, you're not stuck with just Linux: Bhyve lets you spin up FreeBSD VMs too, and maybe even a Windows one down the line. This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it’s a nice setup for getting the most out of your hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":576} APP 2025-04-04T18:57:35.602Z --> skipping because content is stale APP 2025-04-04T18:57:35.602Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:57:35.602Z --> sent request | {"jsonrpc":"2.0","id":55,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:57:35.702Z --> received didChange | language: markdown | contentVersion: 577 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:57:35.894Z --> received didChange | language: markdown | contentVersion: 578 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:57:36.008Z --> received didChange | language: markdown | contentVersion: 579 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:57:36.260Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":97,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":56} APP 2025-04-04T18:57:36.461Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a 
home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. 
as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home. Bhyve gives you a reliable way to manage VMs without much hassle. With Linux VMs, you tap into all the cool stuff in the Linux world while keeping the steady reliability of FreeBSD. Plus, you’re not stuck with just Linux—Bhyve lets you spin up FreeBSD VMs too, and maybe even a Windows one down the line. This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of your hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":579} APP 2025-04-04T18:57:36.461Z --> calling completion event APP 2025-04-04T18:57:36.461Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}}}] APP 2025-04-04T18:57:36.461Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:57:36.461Z --> copilot | completion request APP 2025-04-04T18:57:36.462Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:57:36.905Z --> received didChange | language: markdown | contentVersion: 580 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:57:37.117Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 APP 2025-04-04T18:57:37.118Z --> completion hints: | lab. The performance is excellent, and the management of the VMs is easy with `vm-bhyve`. I am looking forward to the next steps in this series, where we will install k3s on those VMs and run some workloads. APP 2025-04-04T18:57:37.118Z --> sent request | {"jsonrpc":"2.0","id":56,"result":{"isIncomplete":false,"items":[{"label":"lab. The performance is excellent, and the management of the VMs is easy with `vm-bhyve`. 
I am looking forward to the next steps in this series, where we will install k3s on those VMs and run some workloads.","kind":1,"preselect":true,"detail":"lab. The performance is excellent, and the management of the VMs is easy with `vm-bhyve`. I am looking forward to the next steps in this series, where we will install k3s on those VMs and run some workloads.","insertText":"lab. The performance is excellent, and the management of the VMs is easy with `vm-bhyve`. I am looking forward to the next steps in this series, where we will install k3s on those VMs and run some workloads.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":354,"character":304},"end":{"line":354,"character":842}}}]}]}} APP 2025-04-04T18:57:37.118Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:57:37.336Z --> received didChange | language: markdown | contentVersion: 581 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:57:37.396Z --> received didChange | language: markdown | contentVersion: 582 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:57:37.402Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":98,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":57} APP 2025-04-04T18:57:37.582Z --> received didChange | language: markdown | contentVersion: 583 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:57:37.603Z --> running completion on 
buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default 
configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut this needs some changes to make Rocky Linux boot, plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM. So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I ran:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. 
This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on each image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. 
In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % doas sysrc vm_list=\"rocky\"\nvm_list: -> rocky\n```\n\nI also added the VM addresses to the `/etc/hosts` file:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve gives you a reliable way to manage VMs without much hassle. With Linux VMs, you tap into all the cool stuff in the Linux world while keeping the steady reliability of FreeBSD. Plus, you’re not stuck with just Linux; Bhyve lets you spin up FreeBSD VMs too, and maybe even a Windows one down the line. This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of your hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":583} APP 2025-04-04T18:57:37.603Z --> skipping because content is stale APP 2025-04-04T18:57:37.604Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:57:37.604Z --> sent request | {"jsonrpc":"2.0","id":57,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:57:37.696Z --> received didChange | language: markdown | contentVersion: 584 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:57:37.830Z --> received didChange | language: markdown | contentVersion: 585 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:57:39.256Z --> received didChange | language: markdown | contentVersion: 586 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:57:44.201Z --> received didChange | language: markdown | contentVersion: 587 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:57:46.462Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:57:51.564Z --> received didChange | language: markdown | contentVersion: 588 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:57:51.696Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":181,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":58} APP 2025-04-04T18:57:51.898Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal 
ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut this needs some changes to make Rocky Linux boot, plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM. So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I ran:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. 
This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on each image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. 
In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % doas sysrc vm_list=\"rocky\"\nvm_list: -> rocky\n```\n\nI also added the VM addresses to the `/etc/hosts` file:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve gives you a reliable way to manage VMs without much hassle. With Linux VMs, you tap into all the cool stuff in the Linux world while keeping the steady reliability of FreeBSD. Plus, you’re not stuck with just Linux; Bhyve lets you spin up FreeBSD VMs too, and maybe even a Windows one down the line. This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of your hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":588} APP 2025-04-04T18:57:51.898Z --> calling completion event APP 2025-04-04T18:57:51.898Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}}}] APP 2025-04-04T18:57:51.899Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:57:51.899Z --> copilot | completion request APP 2025-04-04T18:57:51.899Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:57:51.981Z --> received didChange | language: markdown | contentVersion: 589 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:57:52.320Z --> received didChange | language: markdown | contentVersion: 590 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:57:52.325Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":183,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":59} APP 2025-04-04T18:57:52.374Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 APP 2025-04-04T18:57:52.375Z --> 
completion hints: | we can run k3s and other workloads that are not available on FreeBSD. The performance of Bhyve is excellent, and the CPU benchmark shows that it is very efficient. APP 2025-04-04T18:57:52.376Z --> sent request | {"jsonrpc":"2.0","id":58,"result":{"isIncomplete":false,"items":[{"label":"we can run k3s and other workloads that are not available on FreeBSD. The performance of Bhyve is excellent, and the CPU benchmark shows that it is very efficient.","kind":1,"preselect":true,"detail":"we can run k3s and other workloads that are not available on FreeBSD. The performance of Bhyve is excellent, and the CPU benchmark shows that it is very efficient.","insertText":"we can run k3s and other workloads that are not available on FreeBSD. The performance of Bhyve is excellent, and the CPU benchmark shows that it is very efficient.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":354,"character":344},"end":{"line":354,"character":794}}}]}]}} APP 2025-04-04T18:57:52.376Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:57:52.526Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut this needs some changes to make Rocky Linux boot, plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM. So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I ran:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on each image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

I mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

## After install

I performed the following steps for all 3 VMs. The examples below are all for the VM `r0` running on host `f0`:

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, I added them to the `vm_list` variable in the `rc.conf` of the FreeBSD hosts (vm-bhyve starts every VM listed there on boot).

### Hostnames

I also added the VM hostnames to the `/etc/hosts` file:

```sh
paul@f0:/bhyve/rocky % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient. 
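Before trusting any numbers, it is worth checking inside a guest that the VM actually sees the resources configured earlier. A quick sketch using standard Linux tools (the values in the comments refer to the 4-vCPU/14G configuration above):

```shell
# Run inside a VM (e.g. r0) to verify the configured resources.
nproc                                        # vCPU count; should print 4 here
free -g | awk '/^Mem:/ {print $2 " GB RAM"}' # total memory; should be ~13-14 GB
# Check that POPCNT is passed through from the host CPU:
grep -o -m1 popcnt /proc/cpuinfo || echo "popcnt not reported"
```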
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
// This lives in a _test.go file so that "go test -bench=." picks it up.
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
 git clone https://codeberg.org/snonux/sillybench && \
 cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
 git clone https://codeberg.org/snonux/sillybench && \
 cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with an up-to-date version of Go (1.24.x), with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

But here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use all of that anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve gives me a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff in the Linux world while keeping the steady reliability of FreeBSD. Plus, you're not stuck with just Linux: Bhyve lets you spin up FreeBSD VMs too, and maybe even a Windows one down the line. This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of your hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
APP 2025-04-04T18:58:05.586Z --> sent request | {"jsonrpc":"2.0","id":61,"result":{"isIncomplete":false,"items":[{"label":"like k3s) that is available in the Linux world. I am looking forward to the next steps in this series, where we will install k3s on those VMs and run some workloads.","kind":1,"preselect":true,"detail":"like k3s) that is available in the Linux world. I am looking forward to the next steps in this series, where we will install k3s on those VMs and run some workloads.","insertText":"like k3s) that is available in the Linux world. I am looking forward to the next steps in this series, where we will install k3s on those VMs and run some workloads.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":354,"character":377},"end":{"line":354,"character":799}}}]}]}} APP 2025-04-04T18:58:05.586Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:58:06.885Z --> received didChange | language: markdown | contentVersion: 592 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:07.010Z --> received didChange | language: markdown | contentVersion: 593 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:07.105Z --> received didChange | language: markdown | contentVersion: 594 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:07.187Z --> received didChange | language: markdown | contentVersion: 595 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:07.306Z --> received didChange | language: markdown | contentVersion: 596 | uri: 
# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4

This is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

The FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will be used later in this series to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number.
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems use this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the firmware package required to make Bhyve boot guests via UEFI.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

TODO: Why this Distro?

### ISO download

We're going to install Rocky Linux from the latest minimal
ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ on each host.

This needs some adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, I run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues.
This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it, as we do this only once a year or even less often.

### Increasing the disk image size

By default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

## After install

I performed the following steps for all 3 VMs.
In the following, the examples are all executed on `f0` (and on the VM `r0`, which runs on `f0`):

### VM auto-start after host reboot

To automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts (the `vm_list` variable is `vm-bhyve`'s auto-start list). I also added the VMs' host names to `/etc/hosts`:

```sh
paul@f0:/bhyve/rocky % doas sysrc vm_list="rocky"
vm_list:  -> rocky
paul@f0:/bhyve/rocky % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing CPU

The aim is to prove that Bhyve VMs are CPU efficient.
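A note on methodology: the `ns/op` figures below come from Go's `testing` package, which runs a benchmark body with a growing iteration count `b.N` until the timing stabilizes, then reports total time divided by iterations. A minimal sketch of that mechanism, driven programmatically:

```go
package main

import (
	"fmt"
	"testing"
)

func main() {
	// testing.Benchmark invokes the function repeatedly with increasing
	// b.N until the measurement is stable, and returns the result.
	res := testing.Benchmark(func(b *testing.B) {
		var sum int
		for i := 0; i < b.N; i++ {
			sum += i * i
		}
		_ = sum // keep the loop from being optimized away entirely
	})
	fmt.Printf("%d iterations, %d ns/op\n", res.N, res.NsPerOp())
}
```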
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4022 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4347 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4345 ns/op
```

The Linux benchmark is slightly slower than the
FreeBSD one. The Go version is also a bit older. I tried the same with an up-to-date version of Go (1.24.x), with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1  1000000000  0.4273 ns/op
BenchmarkCPUSilly2  1000000000  0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! To be clear, this is not really a scientific benchmark, so take the results with a grain of salt!

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve gives me a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. Plus, you’re not stuck with just Linux: Bhyve lets you spin up FreeBSD VMs too, and maybe even a Windows one down the line. This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it’s a nice setup for getting the most out of your hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
I am looking forward to the next steps in this series, where we will install k3s on the VMs and run some workloads on it.","insertText":"k3s) that is available for Linux but not for FreeBSD. I am looking forward to the next steps in this series, where we will install k3s on the VMs and run some workloads on it.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":354,"character":392},"end":{"line":354,"character":814}}}]}]}} APP 2025-04-04T18:58:08.044Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:58:08.127Z --> received didChange | language: markdown | contentVersion: 597 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:08.415Z --> received didChange | language: markdown | contentVersion: 598 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:08.618Z --> received didChange | language: markdown | contentVersion: 599 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:08.693Z --> received didChange | language: markdown | contentVersion: 600 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:08.810Z --> received didChange | language: markdown | contentVersion: 601 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:09.793Z --> received didChange | language: markdown | contentVersion: 602 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:09.861Z --> received didChange | 
language: markdown | contentVersion: 603 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:10.065Z --> received didChange | language: markdown | contentVersion: 604 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:10.126Z --> received didChange | language: markdown | contentVersion: 605 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:10.829Z --> received didChange | language: markdown | contentVersion: 606 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:10.952Z --> received didChange | language: markdown | contentVersion: 607 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:11.049Z --> received didChange | language: markdown | contentVersion: 608 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:11.055Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":229,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":63} APP 2025-04-04T18:58:11.257Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. 
as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home-lab. Bhyve gives me a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. Plus, you’re not stuck with just Linux: Bhyve lets you spin up FreeBSD VMs too, and maybe even a Windows one down the line. This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of your hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":608} APP 2025-04-04T18:58:11.257Z --> calling completion event APP 2025-04-04T18:58:11.257Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}}}] APP 2025-04-04T18:58:11.257Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:58:11.257Z --> copilot | completion request APP 2025-04-04T18:58:11.258Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:58:11.731Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 APP 2025-04-04T18:58:11.732Z --> completion hints: | that is not available on FreeBSD yet. The performance of Bhyve is excellent, and I am looking forward to the next steps in this series. APP 2025-04-04T18:58:11.732Z --> sent request | {"jsonrpc":"2.0","id":63,"result":{"isIncomplete":false,"items":[{"label":"that is not available on FreeBSD yet. The performance of Bhyve is excellent, and I am looking forward to the next steps in this series.","kind":1,"preselect":true,"detail":"that is not available on FreeBSD yet. The performance of Bhyve is excellent, and I am looking forward to the next steps in this series.","insertText":"that is not available on FreeBSD yet. 
The performance of Bhyve is excellent, and I am looking forward to the next steps in this series.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":354,"character":364},"end":{"line":354,"character":786}}}]}]}} APP 2025-04-04T18:58:11.732Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:58:14.589Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:58:14.830Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:58:14.830Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:58:17.513Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:58:21.258Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:58:30.657Z --> received didChange | language: markdown | contentVersion: 609 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:40.792Z --> received didChange | 
language: markdown | contentVersion: 610 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:40.886Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":478,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":64} APP 2025-04-04T18:58:41.089Z --> calling completion event APP 2025-04-04T18:58:41.089Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}}}] APP 2025-04-04T18:58:41.089Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:58:41.090Z --> copilot | completion request APP 2025-04-04T18:58:41.090Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:58:41.099Z --> received didChange | language: markdown | contentVersion: 611 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:41.366Z --> received didChange | language: markdown | contentVersion: 612 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:41.428Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 APP 2025-04-04T18:58:41.428Z --> completion hints: | my home-lab.
APP 2025-04-04T18:58:41.428Z --> sent request | {"jsonrpc":"2.0","id":64,"result":{"isIncomplete":false,"items":[{"label":"my home-lab.","kind":1,"preselect":true,"detail":"my home-lab.","insertText":"my home-lab.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":354,"character":490},"end":{"line":354,"character":535}}}]}]}} APP 2025-04-04T18:58:41.429Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:58:41.445Z --> received didChange | language: markdown | contentVersion: 613 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:41.452Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":481,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":65} APP 2025-04-04T18:58:41.654Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. 
Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. 
as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home-lab. Bhyve gives a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:47.454Z --> received didChange | language: markdown | contentVersion: 615 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:47.554Z --> received didChange | language: markdown | contentVersion: 616 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:47.655Z --> received didChange | language: markdown | contentVersion: 617 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:47.905Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":301,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":67} APP 2025-04-04T18:58:48.106Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. 
Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. 
as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home-lab. Bhyve gives a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. AlsoThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":617} APP 2025-04-04T18:58:48.106Z --> calling completion event APP 2025-04-04T18:58:48.107Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}}}] APP 2025-04-04T18:58:48.107Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:58:48.107Z --> copilot | completion request APP 2025-04-04T18:58:48.107Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:58:48.592Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 APP 2025-04-04T18:58:48.593Z --> completion hints: | , the performance of Bhyve is excellent, and I am happy with the results of my silly benchmark. I am sure that there are better benchmarks out there, but this was a fun way to test it. APP 2025-04-04T18:58:48.593Z --> sent request | {"jsonrpc":"2.0","id":67,"result":{"isIncomplete":false,"items":[{"label":", the performance of Bhyve is excellent, and I am happy with the results of my silly benchmark. I am sure that there are better benchmarks out there, but this was a fun way to test it.","kind":1,"preselect":true,"detail":", the performance of Bhyve is excellent, and I am happy with the results of my silly benchmark. 
I am sure that there are better benchmarks out there, but this was a fun way to test it.","insertText":", the performance of Bhyve is excellent, and I am happy with the results of my silly benchmark. I am sure that there are better benchmarks out there, but this was a fun way to test it.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":354,"character":485},"end":{"line":354,"character":714}}}]}]}} APP 2025-04-04T18:58:48.593Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:58:48.808Z --> received didChange | language: markdown | contentVersion: 618 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:48.996Z --> received didChange | language: markdown | contentVersion: 619 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:49.148Z --> received didChange | language: markdown | contentVersion: 620 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:49.319Z --> received didChange | language: markdown | contentVersion: 621 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:49.497Z --> received didChange | language: markdown | contentVersion: 622 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:49.606Z --> received didChange | language: markdown | contentVersion: 623 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:49.737Z --> received didChange | language: markdown | 
contentVersion: 624 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:49.823Z --> received didChange | language: markdown | contentVersion: 625 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:49.933Z --> received didChange | language: markdown | contentVersion: 626 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:49.986Z --> received didChange | language: markdown | contentVersion: 627 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:50.079Z --> received didChange | language: markdown | contentVersion: 628 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:50.084Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":304,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":68} APP 2025-04-04T18:58:50.285Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot (plus some other adjustments, e.g. 
as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So I ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I ran:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve gives a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":628} APP 2025-04-04T18:58:50.285Z --> calling completion event APP 2025-04-04T18:58:50.285Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}}}] APP 2025-04-04T18:58:50.285Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:58:50.285Z --> copilot | completion request APP 2025-04-04T18:58:50.286Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:58:50.377Z --> received didChange | language: markdown | contentVersion: 629 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:50.445Z --> received didChange | language: markdown | contentVersion: 630 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:50.615Z --> received didChange | language: markdown | contentVersion: 631 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:50.641Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 APP 2025-04-04T18:58:50.641Z --> completion hints: | posts will show how to set up the Kubernetes cluster on top of those VMs. 
APP 2025-04-04T18:58:50.642Z --> sent request | {"jsonrpc":"2.0","id":68,"result":{"isIncomplete":false,"items":[{"label":"posts will show how to set up the Kubernetes cluster on top of those VMs.","kind":1,"preselect":true,"detail":"posts will show how to set up the Kubernetes cluster on top of those VMs.","insertText":"posts will show how to set up the Kubernetes cluster on top of those VMs.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":354,"character":377},"end":{"line":354,"character":606}}}]}]}} APP 2025-04-04T18:58:50.642Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:58:50.815Z --> received didChange | language: markdown | contentVersion: 632 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:50.916Z --> received didChange | language: markdown | contentVersion: 633 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:50.921Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":309,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":69} APP 2025-04-04T18:58:51.054Z --> received didChange | language: markdown | contentVersion: 634 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:51.060Z --> received request: | 
{"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":"(","triggerKind":2},"position":{"character":310,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":70} APP 2025-04-04T18:58:51.090Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:58:51.261Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot (plus some other adjustments, e.g. 
as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So I ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I ran:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve gives a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":634} APP 2025-04-04T18:58:51.261Z --> calling completion event APP 2025-04-04T18:58:51.261Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}}}] APP 2025-04-04T18:58:51.261Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:58:51.261Z --> copilot | completion request APP 2025-04-04T18:58:51.262Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:58:51.285Z --> received didChange | language: markdown | contentVersion: 635 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:51.381Z --> received didChange | language: markdown | contentVersion: 636 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:51.460Z --> received didChange | language: markdown | contentVersion: 637 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:51.591Z --> received didChange | language: markdown | contentVersion: 638 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:51.597Z --> received request: | 
{"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":314,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":71} APP 2025-04-04T18:58:51.644Z --> received didChange | language: markdown | contentVersion: 639 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:51.655Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:58:51.657Z --> sending diagnostics | [{"message":"","severity":1,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}}}] APP 2025-04-04T18:58:51.657Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:58:51.778Z --> received didChange | language: markdown | contentVersion: 640 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:51.798Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It leverages the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction is important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all of its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nTo make Rocky Linux boot, we have to make some adjustments, e.g. switching the loader to UEFI. And as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we also give them beefy specs: 4 CPU cores and 14GB of RAM. So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but with only 3 VMs to install, the automation doesn't seem worth it, as we do this only once a year or even less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. The examples below are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VMs after a host reboot, I added them to the `rc.conf` of the FreeBSD hosts. Furthermore, I added the VM host names to `/etc/hosts`:\n\n```sh\npaul@f0:/bhyve/rocky % doas tee -a /etc/hosts <<END\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve gives me a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":634} APP 2025-04-04T18:58:51.798Z --> skipping because content is stale APP 2025-04-04T18:58:51.798Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:58:51.798Z --> sent request | {"jsonrpc":"2.0","id":71,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:58:51.919Z --> received didChange | language: markdown | contentVersion: 641 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:51.925Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":317,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":72} APP 2025-04-04T18:58:52.126Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"…","languageId":"markdown","version":641} APP 2025-04-04T18:58:52.127Z --> calling completion event APP 2025-04-04T18:58:52.127Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}}}] APP 2025-04-04T18:58:52.127Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:58:52.127Z --> copilot | completion request APP 2025-04-04T18:58:52.127Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:58:52.363Z --> received didChange | language: markdown | contentVersion: 642 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:52.576Z --> failed to parse line: | failed to parse | Content-Length: 14185 APP 2025-04-04T18:58:52.576Z --> received didChange | language: markdown | contentVersion: 643 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:52.720Z --> received didChange | language: markdown | contentVersion: 644 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:52.903Z --> received didChange | language: markdown | contentVersion: 645 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 
2025-04-04T18:58:52.953Z --> received didChange | language: markdown | contentVersion: 646 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:52.995Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 APP 2025-04-04T18:58:52.996Z --> completion hints: | scope of this series) could be to run a FreeBSD VM inside a Linux host or even a Windows VM inside FreeBSD. I am not sure if I will do that, but it is possible. APP 2025-04-04T18:58:52.997Z --> sent request | {"jsonrpc":"2.0","id":72,"result":{"isIncomplete":false,"items":[{"label":"scope of this series) could be to run a FreeBSD VM inside a Linux host or even a Windows VM inside FreeBSD. I am not sure if I will do that, but it is possible.","kind":1,"preselect":true,"detail":"scope of this series) could be to run a FreeBSD VM inside a Linux host or even a Windows VM inside FreeBSD. I am not sure if I will do that, but it is possible.","insertText":"scope of this series) could be to run a FreeBSD VM inside a Linux host or even a Windows VM inside FreeBSD. 
I am not sure if I will do that, but it is possible.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":354,"character":477},"end":{"line":354,"character":706}}}]}]}} APP 2025-04-04T18:58:52.997Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:58:53.070Z --> received didChange | language: markdown | contentVersion: 647 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:53.077Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":323,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":73} APP 2025-04-04T18:58:53.137Z --> received didChange | language: markdown | contentVersion: 648 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:53.237Z --> received didChange | language: markdown | contentVersion: 649 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:53.278Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

TODO: Why this Distro?

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ on each host.

To make Rocky Linux boot, a few adjustments are needed. Also, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14 GB of RAM. So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, I run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs.
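As a side note, the stop-grow-reinstall dance from the previous section could be wrapped in a tiny helper script. This is only a sketch of mine, not part of the original setup; it prints the commands instead of executing them, so drop the `echo`s to run it for real:

```sh
#!/bin/sh
# Dry-run sketch: print the commands that grow a vm-bhyve disk image
# and restart the installer. VM name and target size are parameters.
set -eu
vm=${1:-rocky}
size=${2:-100G}
echo doas vm stop "$vm"
echo doas truncate -s "$size" "/bhyve/$vm/disk0.img"
echo doas vm install "$vm" Rocky-9.5-x86_64-minimal.iso
```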
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

I primarily selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

## After install

I performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (with the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % doas sysrc vm_list="rocky"
```

I also added the VMs' host names to `/etc/hosts`:

```sh
paul@f0:/bhyve/rocky % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient.
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one (about 8% in this run). The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x), with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve gives me a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it’s a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4

This is the fourth blog post in the f3s series about self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Its strengths include minimal overhead, which allows it to achieve near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. POPCNT support matters for Bhyve because guest operating systems use this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to the virtual machines for better performance.
Without POPCNT support, some applications might not run at all or only perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the `bhyve-firmware` package, which is required to boot VMs with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

TODO: Why this Distro?

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ on each host.

This needs some adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14G of RAM. So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, I run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now listening for VNC connections, so I connected with a VNC client and ran through the installation dialogues.
This could have been done unattended or more automated, but there are only 3 VMs to install, and as we do this only once a year or less often, the automation doesn't seem worth it.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

I mostly selected the default settings (auto partitioning on the 100G drive and a root user password). After the installation, the VMs were rebooted.

## After install

I performed the following steps for all 3 VMs.
In the following, the examples are all executed on `f0` (for the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM after a host reboot, I added it to the `vm_list` variable in the `rc.conf` of the FreeBSD hosts; `vm-bhyve` starts all VMs listed there on boot.

### Static host entries

I also added entries for the three VMs to `/etc/hosts`:

```sh
paul@f0:/bhyve/rocky % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`).
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited `/etc/ssh/sshd_config` again on all 3 VMs and set `PasswordAuthentication no` so that only SSH key authentication is allowed from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing the CPU

The aim is to prove that Bhyve VMs are CPU efficient.
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4022 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4347 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

As I am curious and don't want to compare apples with bananas, I decided to also install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12G of RAM; the benchmark won't use that much anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1  1000000000  0.4273 ns/op
BenchmarkCPUSilly2  1000000000  0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve gives me a reliable way to manage VMs without much hassle. With the Linux VMs, I can tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":344,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":77} APP 2025-04-04T18:58:55.468Z --> received didChange | language: markdown | contentVersion: 669 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:55.473Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":345,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":78} APP 2025-04-04T18:58:55.582Z --> received didChange | language: markdown | contentVersion: 670 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:55.646Z --> received didChange | language: markdown | contentVersion: 671 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:55.675Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. 
as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home-lab. Bhyve gives a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. Future uses (out of scope for this blog series) woThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":671} APP 2025-04-04T18:58:55.675Z --> skipping because content is stale APP 2025-04-04T18:58:55.675Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:58:55.675Z --> sent request | {"jsonrpc":"2.0","id":78,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:58:55.739Z --> received didChange | language: markdown | contentVersion: 672 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:55.908Z --> received didChange | language: markdown | contentVersion: 673 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:55.954Z --> received didChange | language: markdown | contentVersion: 674 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:56.058Z --> received didChange | language: markdown | contentVersion: 675 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:56.064Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":351,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":79} APP 2025-04-04T18:58:56.192Z --> received didChange | language: markdown | contentVersion: 676 | uri: 
# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4

This is the fourth blog post of the f3s series about self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

The FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Its minimal overhead allows it to achieve near-native performance for virtual machines, leveraging the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number.
Bhyve's support for the POPCNT instruction matters because guest operating systems use it to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to the virtual machines for better performance; without it, some applications might not run at all or perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

TODO: Why this Distro?

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ on each host.

To make Rocky Linux boot, and to make some other adjustments (as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM), we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, I run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues.
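As a small aside, `vm-bhyve` can also attach to a guest's serial console via `vm console`, which can be handy for text-mode installs (this assumes the guest is set up for a serial console, e.g. `console=ttyS0` on the Linux kernel command line):

```sh
# Attach to the VM's serial console; detach as with cu(1)
doas vm console rocky
```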
This could have been done unattended or at least more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increase of the disk image

By default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

I mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

## After install

I performed the following steps for all 3 VMs.
In the following, the examples are all executed on `f0` (for the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM after a host reboot, I added it to the `vm_list` in the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % doas sysrc vm_list="rocky"
```

### Host name resolution

I also added the following entries to `/etc/hosts` so the VMs can be reached by name:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` to `192.168.1.122` are the static IPs of the VMs (here: `r0`, `r1`, and `r2`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding the line `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` so that only SSH key authentication is allowed from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing the CPU

The aim is to prove that Bhyve VMs are CPU efficient.
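A caveat up front: a single `go test -bench` run is noisy. The standard `go test` flags `-count` and `-benchtime` help to get more stable numbers, for example:

```sh
# Run each benchmark 5 times for 2s each and eyeball the spread
go test -bench=. -count=5 -benchtime=2s
```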
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower (roughly 8% in this run) than the FreeBSD one. The Go version is also a bit older; I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than in the Linux VM! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve gives me a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. 
as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home-lab. Bhyve gives a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. Future uses (out of scope for this blog series) would be adThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":680} APP 2025-04-04T18:58:56.556Z --> skipping because content is stale APP 2025-04-04T18:58:56.556Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:58:56.556Z --> sent request | {"jsonrpc":"2.0","id":80,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:58:56.656Z --> received didChange | language: markdown | contentVersion: 681 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:56.702Z --> received didChange | language: markdown | contentVersion: 682 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:56.855Z --> received didChange | language: markdown | contentVersion: 683 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:56.930Z --> received didChange | language: markdown | contentVersion: 684 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:56.960Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:58:56.987Z --> received didChange | language: markdown | contentVersion: 685 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:57.156Z --> received didChange | language: markdown | contentVersion: 686 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:57.224Z --> received didChange | language: markdown | contentVersion: 687 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:57.331Z --> received didChange | language: markdown | contentVersion: 688 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:57.404Z --> received didChange | language: markdown | contentVersion: 689 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:57.409Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":365,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":81} APP 2025-04-04T18:58:57.532Z --> received didChange | language: markdown | contentVersion: 690 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:57.611Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. 
as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM). So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=\"io\"\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be automated or done unattended, but with only 3 VMs to install, roughly once a year or less often, the automation doesn't seem worth it.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
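As an aside, the `truncate -s 100G` step above completes instantly because the image becomes a sparse file: the apparent size grows, but blocks are only allocated as the guest writes to them. A small local demonstration (GNU coreutils `stat` syntax as found on Linux; FreeBSD's `stat` uses different flags):

```sh
# Demonstrate that truncate produces a sparse file: the apparent size
# is 100G while almost no blocks are actually allocated.
img=$(mktemp)
truncate -s 100G "$img"
apparent=$(stat -c %s "$img")   # logical size in bytes
blocks=$(stat -c %b "$img")     # 512-byte blocks actually allocated
echo "apparent=$apparent allocated_blocks=$blocks"
rm -f "$img"
```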
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\nI mostly selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (with the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\nvm_list=\"rocky\"\n```\n\n### Host entries\n\nTo reach the VMs by name, I appended their addresses to `/etc/hosts`:\n\n```sh\npaul@f0:/bhyve/rocky % doas tee -a /etc/hosts <<END\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. 
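To put a number on the CPU efficiency, I compare ns/op values between host and guest as a relative slowdown. A small helper for that arithmetic (a sketch; the example values are the BenchmarkCPUSilly1 results measured further down in this post):

```sh
# Relative slowdown (in percent) of a guest ns/op value versus a host value.
slowdown() {
  awk -v h="$1" -v v="$2" 'BEGIN { printf "%.1f%%\n", (v - h) / h * 100 }'
}

# Host FreeBSD: 0.4022 ns/op, Rocky Linux VM: 0.4347 ns/op.
slowdown 0.4022 0.4347
```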
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I won't go through the details of installing a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve gives me a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. Future uses (out of scope for this blog series) could include additional VMs for other purposes. This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":690} APP 2025-04-04T18:58:57.611Z --> skipping because content is stale APP 2025-04-04T18:58:57.611Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:58:57.611Z --> sent request | {"jsonrpc":"2.0","id":81,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:58:57.672Z --> received didChange | language: markdown | contentVersion: 691 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:57.903Z --> received didChange | language: markdown | contentVersion: 692 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:58.027Z --> received didChange | language: markdown | contentVersion: 693 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:58.035Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":369,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":82} APP 2025-04-04T18:58:58.107Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:58:58.134Z --> received didChange | 
language: markdown | contentVersion: 694 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:58.237Z --> received didChange | language: markdown | contentVersion: 695 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:58.238Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal 
iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. 
This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. 
In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home-lab. Bhyve gives a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. Future uses (out of scope for this blog series) would be additional VMs foThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":695} APP 2025-04-04T18:58:58.238Z --> skipping because content is stale APP 2025-04-04T18:58:58.238Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:58:58.238Z --> sent request | {"jsonrpc":"2.0","id":82,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:58:58.329Z --> received didChange | language: markdown | contentVersion: 696 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:58.411Z --> received didChange | language: markdown | contentVersion: 697 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:58.425Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":373,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":83} APP 2025-04-04T18:58:58.535Z --> received didChange | language: markdown | contentVersion: 698 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:58:58.626Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for 
self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. 
as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

But as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

But here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve gives me a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. Future uses (out of scope for this blog series) would be additional VMs for different workloads. This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it’s a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 
MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM), we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now also open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. 
This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. 
In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am aware that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home-lab. Bhyve gives a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. Future uses (out of scope for this blog series) would be additional VMs for different workloads. This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":718} APP 2025-04-04T18:59:02.429Z --> calling completion event APP 2025-04-04T18:59:02.429Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}}}] APP 2025-04-04T18:59:02.429Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:59:02.429Z --> copilot | completion request APP 2025-04-04T18:59:02.430Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:59:02.504Z --> received didChange | language: markdown | contentVersion: 719 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:02.931Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 APP 2025-04-04T18:59:02.932Z --> completion hints: | APP 2025-04-04T18:59:02.932Z --> sent request | {"jsonrpc":"2.0","id":86,"result":{"isIncomplete":false,"items":[{"label":"","kind":1,"preselect":true,"detail":"","insertText":"","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":354,"character":394},"end":{"line":354,"character":623}}}]}]}} APP 2025-04-04T18:59:02.932Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:59:03.075Z --> received didChange | language: markdown | contentVersion: 720 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:03.210Z --> received didChange | language: markdown | contentVersion: 721 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:03.307Z --> received didChange | language: markdown | contentVersion: 722 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:03.361Z --> received didChange | language: markdown | contentVersion: 723 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:03.367Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":399,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":87} APP 2025-04-04T18:59:03.569Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot (plus some other adjustments, e.g. 
as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM), we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now also open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am aware that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home-lab. Bhyve gives a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. Future uses (out of scope for this blog series) would be additional VMs for different workloads. This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":723} APP 2025-04-04T18:59:03.569Z --> calling completion event APP 2025-04-04T18:59:03.569Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}}}] APP 2025-04-04T18:59:03.569Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:59:03.570Z --> copilot | completion request APP 2025-04-04T18:59:03.570Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:59:03.635Z --> received didChange | language: markdown | contentVersion: 724 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:03.814Z --> received didChange | language: markdown | contentVersion: 725 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:03.994Z --> received didChange | language: markdown | contentVersion: 726 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:03.999Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":402,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":88} APP 2025-04-04T18:59:04.154Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 APP 2025-04-04T18:59:04.156Z --> completion hints: | I could run a FreeBSD VM inside the Bhyve hypervisor to run FreeBSD workloads. I could also run a Windows VM for some Windows workloads. But I am not sure if I will do that in the future. APP 2025-04-04T18:59:04.156Z --> sent request | {"jsonrpc":"2.0","id":87,"result":{"isIncomplete":false,"items":[{"label":"I could run a FreeBSD VM inside the Bhyve hypervisor to run FreeBSD workloads. I could also run a Windows VM for some Windows workloads. But I am not sure if I will do that in the future.","kind":1,"preselect":true,"detail":"I could run a FreeBSD VM inside the Bhyve hypervisor to run FreeBSD workloads. I could also run a Windows VM for some Windows workloads. But I am not sure if I will do that in the future.","insertText":"I could run a FreeBSD VM inside the Bhyve hypervisor to run FreeBSD workloads. I could also run a Windows VM for some Windows workloads. 
But I am not sure if I will do that in the future.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":354,"character":586},"end":{"line":354,"character":815}}}]}]}} APP 2025-04-04T18:59:04.156Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:59:04.201Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post of the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
Support for the POPCNT instruction matters for CPU virtualization with Bhyve because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run, or might perform suboptimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal 
ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot, the configuration needs a few adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. 
This could be done unattended or automated further, but there are only 3 VMs to install, and as we do this only once a year or less often, the automation doesn't seem worth it.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\nI mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. 
In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VMs after a host reboot, I added them to the `vm_list` variable in `rc.conf` on the FreeBSD hosts (e.g. `doas sysrc vm_list=\"rocky\"`). I also added the VM IPs to `/etc/hosts` on each host:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not a scientific benchmark, so take the results with a grain of salt!\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve gives me a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. Future uses (out of scope for this blog series) would be additional VMs for different workloads, e.g. hosting a web server or a database server. This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":726} APP 2025-04-04T18:59:04.202Z --> calling completion event APP 2025-04-04T18:59:04.202Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}}}] APP 2025-04-04T18:59:04.202Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:59:04.202Z --> copilot | completion request APP 2025-04-04T18:59:04.203Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:59:04.277Z --> received didChange | language: markdown | contentVersion: 727 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:04.510Z --> received didChange | language: markdown | contentVersion: 728 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:04.597Z --> received didChange | language: markdown | contentVersion: 729 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:04.602Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":403,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":89} APP 2025-04-04T18:59:04.728Z --> received didChange | language: markdown | contentVersion: 730 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:04.804Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post of the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
Support for the POPCNT instruction matters for CPU virtualization with Bhyve because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run, or might perform suboptimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal 
ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot, the configuration needs a few adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. 
This could be done unattended or automated further, but there are only 3 VMs to install, and as we do this only once a year or less often, the automation doesn't seem worth it.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\nI mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. 
In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VMs after a host reboot, I added them to the `vm_list` variable in `rc.conf` on the FreeBSD hosts (e.g. `doas sysrc vm_list=\"rocky\"`). I also added the VM IPs to `/etc/hosts` on each host:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not a scientific benchmark, so take the results with a grain of salt!\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve gives me a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. Future uses (out of scope for this blog series) would be additional VMs for different workloads, e.g. hosting a web server or a database server. This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":730} APP 2025-04-04T18:59:04.804Z --> skipping because content is stale APP 2025-04-04T18:59:04.805Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:59:04.805Z --> sent request | {"jsonrpc":"2.0","id":89,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:59:04.835Z --> received didChange | language: markdown | contentVersion: 731 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:04.874Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 APP 2025-04-04T18:59:04.875Z --> completion hints: | st a web server or a database server. I could also run a FreeBSD VM inside Bhyve, but I don't see the need for that at the moment. APP 2025-04-04T18:59:04.875Z --> sent request | {"jsonrpc":"2.0","id":88,"result":{"isIncomplete":false,"items":[{"label":"st a web server or a database server. I could also run a FreeBSD VM inside Bhyve, but I don't see the need for that at the moment.","kind":1,"preselect":true,"detail":"st a web server or a database server. I could also run a FreeBSD VM inside Bhyve, but I don't see the need for that at the moment.","insertText":"st a web server or a database server. 
I could also run a FreeBSD VM inside Bhyve, but I don't see the need for that at the moment.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":354,"character":532},"end":{"line":354,"character":761}}}]}]}} APP 2025-04-04T18:59:04.875Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:59:05.012Z --> received didChange | language: markdown | contentVersion: 732 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:05.075Z --> received didChange | language: markdown | contentVersion: 733 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:05.148Z --> received didChange | language: markdown | contentVersion: 734 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:05.229Z --> received didChange | language: markdown | contentVersion: 735 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:05.234Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":409,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":90} APP 2025-04-04T18:59:05.356Z --> received didChange | language: markdown | contentVersion: 736 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:05.435Z --> running completion on buffer | 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post of the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Support for the POPCNT instruction matters for CPU virtualization with Bhyve because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run, or might perform suboptimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat 
rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nTo make Rocky Linux boot, the configuration needs some adjustments. As we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we also give them beefy specs: 4 CPU cores and 14G of RAM. So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. 
This could be done unattended or in a more automated fashion, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\nI mostly selected the default settings (auto partitioning on the 100G drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. 
In the following, the examples are all executed on `f0` (with the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM after a host reboot and to make the VM host names resolvable, I added the VM to `vm_list` in the `rc.conf` on the FreeBSD hosts and appended the VM addresses to `/etc/hosts`:\n\n```sh\npaul@f0:/bhyve/rocky % doas sysrc vm_list=\"rocky\"\nvm_list: -> rocky\npaul@f0:/bhyve/rocky % doas tee -a /etc/hosts <<END\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12G of RAM; the benchmark won't make use of that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve gives me a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. Future uses (out of scope for this blog series) would be additional VMs for different workloads. E.g. how about a Windows VM? This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":736} APP 2025-04-04T18:59:05.436Z --> skipping because content is stale APP 2025-04-04T18:59:05.436Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:59:05.436Z --> sent request | {"jsonrpc":"2.0","id":90,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:59:05.438Z --> received didChange | language: markdown | contentVersion: 737 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:05.444Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":411,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":91} APP 2025-04-04T18:59:05.639Z --> received didChange | language: markdown | contentVersion: 738 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:05.645Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. 
as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home-lab. Bhyve gives a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. Future uses (out of scope for this blog series) would be additional VMs for different workloads. E.g. how about a WThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":738} APP 2025-04-04T18:59:05.645Z --> skipping because content is stale APP 2025-04-04T18:59:05.645Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:59:05.645Z --> sent request | {"jsonrpc":"2.0","id":91,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:59:05.706Z --> received didChange | language: markdown | contentVersion: 739 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:05.752Z --> received didChange | language: markdown | contentVersion: 740 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:05.873Z --> received didChange | language: markdown | contentVersion: 741 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:05.988Z --> received didChange | language: markdown | contentVersion: 742 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:06.093Z --> received didChange | language: markdown | contentVersion: 743 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:06.276Z --> received didChange | language: markdown | contentVersion: 744 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:06.527Z --> 
received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":418,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":92} APP 2025-04-04T18:59:06.729Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. 
If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 
MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. 
This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. 
In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home-lab. Bhyve gives a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. Future uses (out of scope for this blog series) would be additional VMs for different workloads. E.g., how about a Windows VM? This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":744} APP 2025-04-04T18:59:06.730Z --> calling completion event APP 2025-04-04T18:59:06.730Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}}}] APP 2025-04-04T18:59:06.730Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:59:06.730Z --> copilot | completion request APP 2025-04-04T18:59:06.730Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:59:06.755Z --> received didChange | language: markdown | contentVersion: 745 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:06.761Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":419,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":93} APP 2025-04-04T18:59:06.961Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It leverages the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction is important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot (plus some other adjustments, e.g. 
as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM), I ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I ran:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could have been done unattended or more automated, but with only 3 VMs to install, and installations happening once a year at most, the automation doesn't seem worth the effort.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\nI primarily selected the default settings (automatic partitioning of the 100GB drive and setting a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM after a host reboot, I adjusted the `rc.conf` on the FreeBSD hosts accordingly.\n\n### `/etc/hosts` entries\n\nI also added the VMs to `/etc/hosts` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home-lab. Bhyve gives a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. Future uses (out of scope for this blog series) would be additional VMs for different workloads. E.g., how about a Windows VM? This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":745} APP 2025-04-04T18:59:06.962Z --> calling completion event APP 2025-04-04T18:59:06.962Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}}}] APP 2025-04-04T18:59:06.962Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:59:06.962Z --> copilot | completion request APP 2025-04-04T18:59:06.962Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:59:06.963Z --> sending diagnostics | [{"message":"","severity":1,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}}}] APP 2025-04-04T18:59:06.963Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:59:07.014Z --> received didChange | language: markdown | contentVersion: 746 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:07.202Z --> received didChange | language: markdown | contentVersion: 747 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:07.537Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 APP 2025-04-04T18:59:07.537Z --> completion hints: | VM for gaming? Or a FreeBSD VM for running FreeBSD services? The possibilities are endless. APP 2025-04-04T18:59:07.537Z --> sent request | {"jsonrpc":"2.0","id":93,"result":{"isIncomplete":false,"items":[{"label":"VM for gaming? Or a FreeBSD VM for running FreeBSD services? The possibilities are endless.","kind":1,"preselect":true,"detail":"VM for gaming? Or a FreeBSD VM for running FreeBSD services? The possibilities are endless.","insertText":"VM for gaming? Or a FreeBSD VM for running FreeBSD services? The possibilities are endless.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":354,"character":510},"end":{"line":354,"character":739}}}]}]}} APP 2025-04-04T18:59:07.537Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:59:07.993Z --> received didChange | language: markdown | contentVersion: 748 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:08.243Z --> received didChange | language: markdown | contentVersion: 749 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:08.248Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":423,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":94} APP 
2025-04-04T18:59:08.450Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post in the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It leverages the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction is important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default 
configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM), I ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I ran:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. 
This could have been done unattended or more automated, but with only 3 VMs to install, and installations happening once a year at most, the automation doesn't seem worth the effort.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\nI primarily selected the default settings (automatic partitioning of the 100GB drive and setting a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. 
In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM after a host reboot, I adjusted the `rc.conf` on the FreeBSD hosts accordingly.\n\n### `/etc/hosts` entries\n\nI also added the VMs to `/etc/hosts` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. 
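For the auto-start step above, `vm-bhyve` documents the `vm_list` variable in `rc.conf` (with an optional `vm_delay` between VM starts). A sketch of what the `sysrc` calls could look like, assuming the VM is named `rocky` as in this series:

```sh
# Start the rocky VM automatically when the host boots
# (vm-bhyve reads vm_list from /etc/rc.conf)
doas sysrc vm_list="rocky"
# Optionally wait a few seconds between starting multiple VMs
doas sysrc vm_delay="5"
```

The exact values are illustrative; check the vm-bhyve README for the variables supported by your installed version.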
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home-lab. Bhyve gives a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. Future uses (out of scope for this blog series) would be additional VMs for different workloads. E.g., how about a Windows VM? This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":749} APP 2025-04-04T18:59:08.450Z --> calling completion event APP 2025-04-04T18:59:08.450Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}}}] APP 2025-04-04T18:59:08.450Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:59:08.450Z --> copilot | completion request APP 2025-04-04T18:59:08.451Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:59:08.675Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 APP 2025-04-04T18:59:08.675Z --> completion hints: | APP 2025-04-04T18:59:08.675Z --> sent request | {"jsonrpc":"2.0","id":94,"result":{"isIncomplete":false,"items":[{"label":"","kind":1,"preselect":true,"detail":"","insertText":"","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":354,"character":423},"end":{"line":354,"character":652}}}]}]}} APP 2025-04-04T18:59:08.675Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:59:10.172Z --> received didChange | language: markdown | contentVersion: 750 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:10.489Z --> received didChange | language: markdown | contentVersion: 751 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:10.544Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:59:10.710Z --> received didChange | language: markdown | contentVersion: 752 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:10.798Z --> received didChange | language: markdown | contentVersion: 753 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:11.049Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":425,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":95} APP 2025-04-04T18:59:11.115Z --> received didChange | language: markdown | contentVersion: 754 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:11.250Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Its strengths include minimal overhead, which allows it to achieve near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. It matters here because its presence indicates a CPU generation modern enough for Bhyve, and guest operating systems also use the instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to the virtual machines for better performance. Without POPCNT support, some applications might not run at all or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
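To make the `POPCNT` check above a bit more tangible: the instruction simply counts one-bits, and Go (used for benchmarking later in this post) exposes it via `math/bits`. On amd64 CPUs that advertise POPCNT, `bits.OnesCount64` compiles down to that single instruction. A minimal, hypothetical illustration (not part of the original setup):

```go
package main

import (
	"fmt"
	"math/bits"
)

// popcount returns the number of set bits in x. On amd64 CPUs with POPCNT
// support, the Go compiler lowers bits.OnesCount64 to the POPCNT instruction.
func popcount(x uint64) int {
	return bits.OnesCount64(x)
}

func main() {
	fmt.Println(popcount(0b1011))     // binary 1011 has three set bits
	fmt.Println(popcount(0xFFFFFFFF)) // 32 set bits
}
```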
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot (plus some other adjustments, e.g. 
as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM). So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I ran:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do this only once a year or even less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs; the examples shown were executed on `f0` and its VM `r0`:\n\n### VM auto-start after host reboot\n\nTo automatically start the VM after a host reboot, I added it to the `vm_list` variable in the `rc.conf` of the FreeBSD hosts:\n\n```sh\npaul@f0:~ % doas sysrc vm_list=rocky\nvm_list: -> rocky\n```\n\n### Host entries\n\nI also added the VM names to `/etc/hosts`:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, I ran `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve gives me a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. Future uses (out of scope for this blog series) would be additional VMs for different workloads. E.g. how about a Windows VM? This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":754} APP 2025-04-04T18:59:11.250Z --> skipping because content is stale APP 2025-04-04T18:59:11.250Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:59:11.250Z --> sent request | {"jsonrpc":"2.0","id":95,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:59:11.264Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:59:11.276Z --> received didChange | language: markdown | contentVersion: 755 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:11.430Z --> received didChange | language: markdown | contentVersion: 756 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:11.556Z --> received didChange | language: markdown | contentVersion: 757 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:11.653Z --> received didChange | language: markdown | contentVersion: 758 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:11.658Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":426,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":96} APP 2025-04-04T18:59:11.765Z --> received didChange | language: markdown | contentVersion: 759 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:11.855Z --> received didChange | language: markdown | contentVersion: 760 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:11.859Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. 
as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home-lab. Bhyve gives a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. Future uses (out of scope for this blog series) would be additional VMs for different workloads. E.g. how about a Windows VM? Or a This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":760} APP 2025-04-04T18:59:11.859Z --> skipping because content is stale APP 2025-04-04T18:59:11.859Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:59:11.859Z --> sent request | {"jsonrpc":"2.0","id":96,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:59:11.862Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":428,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":97} APP 2025-04-04T18:59:12.054Z --> received didChange | language: markdown | contentVersion: 761 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:12.062Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the firmware package required to boot Bhyve guests via UEFI.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable: -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir: -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve 1.74M 453G 1.74M /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
```

## Rocky Linux VMs

TODO: Why this Distro?

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
 https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ on each host.

To make Rocky Linux boot, the configuration needs a few changes. And as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we also give them beefy specs: 4 CPU cores and 14G of RAM. So we run `doas vm configure rocky` and modify the file to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, I run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
 * found guest in /zroot/bhyve/rocky
 * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root bhyve 6079 8 tcp4 *:5900 *:*
```

Port 5900 now accepts VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs.
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

## After install

I performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (and in the VM `r0`, which runs on `f0`):

### VM auto-start after host reboot

To automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % doas sysrc vm_list="rocky"
```

### Host name resolution

I also added the host names of the three VMs to `/etc/hosts`:

```sh
paul@f0:/bhyve/rocky % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding the line `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient.
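Before looking at the tool itself, a quick note on how Go's benchmark harness works: `go test -bench` calls each `Benchmark*` function with an increasing iteration count `b.N` until the timing stabilizes, then reports nanoseconds per operation. A minimal sketch of the same machinery driven programmatically via `testing.Benchmark` (the function `sillyWork` is made up for illustration):

```go
package main

import (
	"fmt"
	"testing"
)

// sillyWork is a made-up stand-in for the kind of tight CPU loop
// benchmarked in this post.
func sillyWork(n int) int {
	sum := 0
	for i := 0; i < n; i++ {
		sum += i * i
	}
	return sum
}

func main() {
	// testing.Benchmark runs the closure with growing b.N until the
	// measurement is stable, the same machinery `go test -bench=.` uses.
	res := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			_ = sillyWork(1000)
		}
	})
	fmt.Printf("%d iterations, %d ns/op\n", res.N, res.NsPerOp())
}
```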
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
 git clone https://codeberg.org/snonux/sillybench && \
 cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
 git clone https://codeberg.org/snonux/sillybench && \
 cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! Admittedly, this is not really a scientific benchmark, so take the results with a grain of salt!

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve gives me a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. Future uses (out of scope for this blog series) would be additional VMs for different workloads. E.g. how about a Windows VM? Or a NetBSD VM? This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:12.863Z --> received didChange | language: markdown | contentVersion: 767 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:12.869Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":435,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":98} APP 2025-04-04T18:59:13.072Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. 
as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home-lab. Bhyve gives a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. Future uses (out of scope for this blog series) would be additional VMs for different workloads. E.g. how about a Windows VM? Or a NetBSD This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":767} APP 2025-04-04T18:59:13.072Z --> calling completion event APP 2025-04-04T18:59:13.072Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}}}] APP 2025-04-04T18:59:13.072Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:59:13.072Z --> copilot | completion request APP 2025-04-04T18:59:13.073Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:59:13.377Z --> received didChange | language: markdown | contentVersion: 768 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:13.487Z --> received didChange | language: markdown | contentVersion: 769 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:13.564Z --> received didChange | language: markdown | contentVersion: 770 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:13.569Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":438,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":99} APP 2025-04-04T18:59:13.570Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:59:13.570Z --> sending diagnostics | [{"message":"","severity":1,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}}}] APP 2025-04-04T18:59:13.570Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:59:13.734Z --> received didChange | language: markdown | contentVersion: 771 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:13.771Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. 
as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the
FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use more than that anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. Future uses (out of scope for this blog series) would be additional VMs for different workloads. How about a Windows VM, or a NetBSD VM? This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it’s a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":771} APP 2025-04-04T18:59:13.771Z --> skipping because content is stale APP 2025-04-04T18:59:13.771Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:59:13.771Z --> sent request | {"jsonrpc":"2.0","id":99,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:59:13.823Z --> received didChange | language: markdown | contentVersion: 772 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:13.888Z --> received didChange | language: markdown | contentVersion: 773 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:13.893Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":441,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":100} APP 2025-04-04T18:59:14.069Z --> received didChange | language: markdown | contentVersion: 774 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:14.095Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for 
self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. 
as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home-lab. Bhyve gives a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. Future uses (out of scope for this blog series) would be additional VMs for different workloads. E.g. how about a Windows VM? Or a NetBSD VM to tThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":774} APP 2025-04-04T18:59:14.095Z --> skipping because content is stale APP 2025-04-04T18:59:14.095Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:59:14.095Z --> sent request | {"jsonrpc":"2.0","id":100,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:59:14.151Z --> received didChange | language: markdown | contentVersion: 775 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:14.203Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:59:14.402Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":443,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":101} APP 2025-04-04T18:59:14.496Z --> received didChange | language: markdown | contentVersion: 776 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:14.604Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about 
the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. 
It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nTODO: Why this Distro?\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ on each host.\n\nBut to make Rocky Linux boot it (plus some other 
adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, I run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. 
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home-lab. Bhyve gives a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. Future uses (out of scope for this blog series) would be additional VMs for different workloads. E.g. how about a Windows VM? Or a NetBSD VM to tinThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":776} APP 2025-04-04T18:59:14.604Z --> skipping because content is stale APP 2025-04-04T18:59:14.605Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:59:14.605Z --> sent request | {"jsonrpc":"2.0","id":101,"result":{"isIncomplete":false,"items":[]}} APP 2025-04-04T18:59:14.670Z --> received didChange | language: markdown | contentVersion: 777 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:14.755Z --> received didChange | language: markdown | contentVersion: 778 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:15.006Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":446,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":102} APP 2025-04-04T18:59:15.209Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is FreeBSD's lightweight, modern hypervisor. Its strengths include minimal overhead, which allows it to achieve near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction matters because guest operating systems use it to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance. Without POPCNT support, some applications might not run, or might perform sub-optimally, in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of manual overhead.
We also install the `bhyve-firmware` package, which is required to boot guests with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

TODO: Why this Distro?

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ on each host.

To make Rocky Linux boot, and to make some other adjustments (as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM), we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, I run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs.
Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

I mostly kept the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

## After install

I performed the following steps on all 3 VMs. The examples are all executed on `f0` (and the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, I added them to the `vm_list` in `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % doas sysrc vm_list="rocky"
vm_list:  -> rocky
```

I also added the names of all three VMs to `/etc/hosts`:

```sh
paul@f0:/bhyve/rocky % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public SSH key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited `/etc/ssh/sshd_config` again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU efficient.
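Before looking at the tool itself: `go test -bench` reports ns/op, which is essentially the wall-clock time for the whole benchmark loop divided by the iteration count `b.N` (the testing package calibrates `b.N` automatically until the benchmark runs long enough). A hand-rolled sketch of that calculation, with a hypothetical `measure` helper that is not part of sillybench:

```go
package main

import (
    "fmt"
    "time"
)

// measure mimics what `go test -bench` reports: wall-clock time for n
// iterations of op, divided by n, in nanoseconds per operation.
func measure(n int, op func(i int)) float64 {
    start := time.Now()
    for i := 0; i < n; i++ {
        op(i)
    }
    return float64(time.Since(start)) / float64(n)
}

func main() {
    var sink float64 // written to so the loop body can't be optimized away
    nsPerOp := measure(1_000_000, func(i int) {
        sink += float64(i) * float64(i)
    })
    fmt.Printf("%.4f ns/op\n", nsPerOp)
    _ = sink
}
```

Sub-nanosecond figures like the ones below just mean the loop body is a handful of register operations; they are only meaningful relative to each other on the same CPU.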
As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
    for i := 0; i < b.N; i++ {
        _ = i * i
    }
}

func BenchmarkCPUSilly2(b *testing.B) {
    var sillyResult float64
    for i := 0; i < b.N; i++ {
        sillyResult += float64(i)
        sillyResult *= float64(i)
        divisor := float64(i) + 1
        if divisor > 0 {
            sillyResult /= divisor
        }
    }
    _ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I won't go through the details of installing a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's even a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve gives me a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff in the Linux world (e.g. Kubernetes) while keeping the steady reliability of FreeBSD. Future uses (out of scope for this blog series) would be additional VMs for different workloads. How about a Windows VM? Or a NetBSD VM to tinker with? This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the 
FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home-lab. Bhyve gives a reliable way to manage VMs without much hassle. With Linux VMs, I tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. Future uses (out of scope for this blog series) would be additional VMs for different workloads. E.g. how about a Windows VM? Or a NetBSD VM to tinker with? This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it’s a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":788} APP 2025-04-04T18:59:16.605Z --> calling completion event APP 2025-04-04T18:59:16.606Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}}}] APP 2025-04-04T18:59:16.606Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:59:16.606Z --> copilot | completion request APP 2025-04-04T18:59:16.606Z --> fetch | /v1/engines/copilot-codex/completions APP 2025-04-04T18:59:16.731Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:59:16.731Z --> sending diagnostics | [{"message":"","severity":1,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}}}] APP 2025-04-04T18:59:16.731Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":354,"character":0},"end":{"line":355,"character":0}},"source":"helix-gpt"}]}} APP 2025-04-04T18:59:16.962Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:59:16.963Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:59:18.167Z --> received didChange | language: markdown | contentVersion: 789 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:18.332Z --> received didChange | language: markdown | contentVersion: 790 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl APP 2025-04-04T18:59:18.451Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:59:23.073Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:59:23.571Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:59:25.031Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":295,"line":354},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":106} APP 2025-04-04T18:59:25.210Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} APP 2025-04-04T18:59:25.233Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. 
If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance. Without POPCNT support, some applications might not run at all, or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<...,POPCNT,...>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of manual overhead. We also install the `bhyve-firmware` package, which is required to boot VMs with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable: -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir: -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

vm-bhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve 1.74M 453G 1.74M /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
```

## Rocky Linux VMs

TODO: Why this Distro?

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default configuration looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ on each host.

To make Rocky Linux boot, the configuration needs a few adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. So we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, I run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME  DATASTORE LOADER CPU MEMORY VNC          AUTO STATE
rocky default   uefi   4   14G    0.0.0.0:5900 No   Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root bhyve 6079 8 tcp4 *:5900 *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do this only once a year or even less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

I primarily selected the default settings (auto-partitioning on the 100G drive and a root user password). After the installation, the VMs were rebooted.

## After install

I performed the following steps for all 3 VMs.
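Since every step below has to be repeated on each of the three VMs, it can also be scripted from the workstation once SSH access is in place. A hypothetical sketch (with `echo` as a dry run; remove it to actually execute the command over SSH):

```sh
# Dry run: print the command that would be executed per VM.
for i in 0 1 2; do
  echo ssh "root@r$i.lan.buetow.org" dnf -y update
done
```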
In the following, the examples are all executed on `f0` (and on the VM `r0`, which runs on `f0`):

### VM auto-start after host reboot

To automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % doas sysrc vm_list="rocky"
```

### Adding the VMs to `/etc/hosts`

I also added all three VM names to `/etc/hosts`:

```sh
paul@f0:/bhyve/rocky % cat << END >> /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside of it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

But as I am curious and don't want to compare apples to oranges, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's even a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g. Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD. Future uses (out of scope for this blog series) would be additional VMs for different workloads. E.g., how about a Windows VM? Or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site