-rw-r--r--  about/resources.gmi    | 194
-rw-r--r--  gemfeed/atom.xml       |   4
-rw-r--r--  gemfeed/helix-gpt.log  | 184
-rw-r--r--  index.gmi              |   2
-rw-r--r--  uptime-stats.gmi       |   2
5 files changed, 286 insertions, 100 deletions
diff --git a/about/resources.gmi b/about/resources.gmi
index c680f981..20c178d7 100644
--- a/about/resources.gmi
+++ b/about/resources.gmi
@@ -36,102 +36,102 @@ You won't find any links on this site because, over time, the links will break.
In random order:
-* Concurrency in Go; Katherine Cox-Buday; O'Reilly
-* Hands-on Infrastructure Monitoring with Prometheus; Joel Bastos, Pedro Araujo; Packt
-* Clusterbau mit Linux-HA; Michael Schwartzkopff; O'Reilly
-* 100 Go Mistakes and How to Avoid Them; Teiva Harsanyi; Manning Publications
-* 97 things every SRE should know; Emil Stolarsky, Jaime Woo; O'Reilly
-* Terraform Cookbook; Mikael Krief; Packt Publishing
-* Effective awk programming; Arnold Robbins; O'Reilly
-* Higher Order Perl; Mark Dominus; Morgan Kaufmann
-* The KCNA (Kubernetes and Cloud Native Associate) Book; Nigel Poulton
-* Developing Games in Java; David Brackeen and others...; New Riders
+* Programming Ruby 3.3 (5th Edition); Noel Rappin, with Dave Thomas; The Pragmatic Bookshelf
+* Perl New Features; Joshua McAdams, brian d foy; Perl School
* Raku Fundamentals; Moritz Lenz; Apress
-* Effective Java; Joshua Bloch; Addison-Wesley Professional
-* Learn You a Haskell for Great Good!; Miran Lipovaca; No Starch Press
-* Funktionale Programmierung; Peter Pepper; Springer
+* The Practice of System and Network Administration; Thomas A. Limoncelli, Christina J. Hogan, Strata R. Chalup; Addison-Wesley Professional
+* Pro Git; Scott Chacon, Ben Straub; Apress
* Tmux 2: Productive Mouse-free Development; Brian P. Hogan; The Pragmatic Programmers
+* Clusterbau mit Linux-HA; Michael Schwartzkopff; O'Reilly
+* The Go Programming Language; Alan A. A. Donovan; Addison-Wesley Professional
* The Docker Book; James Turnbull; Kindle
+* Developing Games in Java; David Brackeen and others...; New Riders
* Data Science at the Command Line; Jeroen Janssens; O'Reilly
-* 21st Century C: C Tips from the New School; Ben Klemens; O'Reilly
-* Modern Perl; Chromatic ; Onyx Neon Press
-* Perl New Features; Joshua McAdams, brian d foy; Perl School
+* DNS and BIND; Cricket Liu; O'Reilly
* Raku Recipes; J.J. Merelo; Apress
-* Leanring eBPF; Liz Rice; O'Reilly
-* Think Raku (aka Think Perl 6); Laurent Rosenfeld, Allen B. Downey; O'Reilly
-* Object-Oriented Programming with ANSI-C; Axel-Tobias Schreiner
-* Go Brain Teasers - Exercise Your Mind; Miki Tebeka; The Pragmatic Programmers
* The Pragmatic Programmer; David Thomas; Addison-Wesley
-* Programming Perl aka "The Camel Book"; Tom Christiansen, brian d foy, Larry Wall & Jon Orwant; O'Reilly
-* Pro Puppet; James Turnbull, Jeffrey McCune; Apress
-* The Practise of System and Network Administration; Thomas A. Limoncelli, Christina J. Hogan, Strata R. Chalup; Addison-Wesley Professional Pro Git; Scott Chacon, Ben Straub; Apress
-* DevOps And Site Reliability Engineering Handbook; Stephen Fleming; Audible
-* Amazon Web Services in Action; Michael Wittig and Andreas Wittig; Manning Publications
-* Learn You Some Erlang for Great Good; Fred Herbert; No Starch Press
-* DNS and BIND; Cricket Liu; O'Reilly
-* The Kubernetes Book; Nigel Poulton; Unabridged Audiobook
-* Programming Ruby 3.3 (5th Edition); Noel Rappin, with Dave Thomas; The Pragmatic Bookshelf
+* 97 things every SRE should know; Emil Stolarsky, Jaime Woo; O'Reilly
+* Modern Perl; chromatic; Onyx Neon Press
+* Site Reliability Engineering: How Google Runs Production Systems; Betsy Beyer, Chris Jones, Jennifer Petoff, Niall Richard Murphy (eds.); O'Reilly
+* Concurrency in Go; Katherine Cox-Buday; O'Reilly
* Polished Ruby Programming; Jeremy Evans; Packt Publishing
* The DevOps Handbook; Gene Kim, Jez Humble, Patrick Debois, John Willis; Audible
-* The Go Programming Language; Alan A. A. Donovan; Addison-Wesley Professional
-* Ultimate Go Notebook; Bill Kennedy
-* Systemprogrammierung in Go; Frank Müller; dpunkt
+* 21st Century C: C Tips from the New School; Ben Klemens; O'Reilly
+* The KCNA (Kubernetes and Cloud Native Associate) Book; Nigel Poulton
+* Terraform Cookbook; Mikael Krief; Packt Publishing
* Java ist auch eine Insel; Christian Ullenboom; Rheinwerk Computing
+* Systemprogrammierung in Go; Frank Müller; dpunkt
+* Go Brain Teasers - Exercise Your Mind; Miki Tebeka; The Pragmatic Programmers
+* Effective Java; Joshua Bloch; Addison-Wesley Professional
+* Learn You Some Erlang for Great Good!; Fred Hebert; No Starch Press
+* Ultimate Go Notebook; Bill Kennedy
+* Object-Oriented Programming with ANSI-C; Axel-Tobias Schreiner
+* DevOps And Site Reliability Engineering Handbook; Stephen Fleming; Audible
+* The Kubernetes Book; Nigel Poulton; Unabridged Audiobook
* Kubernetes Cookbook; Sameer Naik, Sébastien Goasguen, Jonathan Michaux; O'Reilly
+* Higher Order Perl; Mark Dominus; Morgan Kaufmann
+* Funktionale Programmierung; Peter Pepper; Springer
* Systems Performance Tuning; Gian-Paolo D. Musumeci and others...; O'Reilly
-* C++ Programming Language; Bjarne Stroustrup;
+* Effective awk programming; Arnold Robbins; O'Reilly
+* 100 Go Mistakes and How to Avoid Them; Teiva Harsanyi; Manning Publications
+* Pro Puppet; James Turnbull, Jeffrey McCune; Apress
+* Programming Perl aka "The Camel Book"; Tom Christiansen, brian d foy, Larry Wall & Jon Orwant; O'Reilly
+* Think Raku (aka Think Perl 6); Laurent Rosenfeld, Allen B. Downey; O'Reilly
+* Learning eBPF; Liz Rice; O'Reilly
+* Learn You a Haskell for Great Good!; Miran Lipovaca; No Starch Press
* Distributed Systems: Principles and Paradigms; Andrew S. Tanenbaum; Pearson
-* Site Reliability Engineering; How Google runs production systems; O'Reilly
+* The C++ Programming Language; Bjarne Stroustrup; Addison-Wesley
+* Amazon Web Services in Action; Michael Wittig and Andreas Wittig; Manning Publications
+* Hands-on Infrastructure Monitoring with Prometheus; Joel Bastos, Pedro Araujo; Packt
## Technical references
I didn't read them from beginning to end, but I use them to look things up. The books are in random order:
-* BPF Performance Tools - Linux System and Application Observability, Brendan Gregg; Addison Wesley
-* Relayd and Httpd Mastery; Michael W Lucas
-* Understanding the Linux Kernel; Daniel P. Bovet, Marco Cesati; O'Reilly
-* Groovy Kurz & Gut; Joerg Staudemeier; O'Reilly
* The Linux Programming Interface; Michael Kerrisk; No Starch Press
* Implementing Service Level Objectives; Alex Hidalgo; O'Reilly
+* Groovy Kurz & Gut; Joerg Staudemeier; O'Reilly
+* Understanding the Linux Kernel; Daniel P. Bovet, Marco Cesati; O'Reilly
+* BPF Performance Tools - Linux System and Application Observability, Brendan Gregg; Addison Wesley
* Algorithms; Robert Sedgewick, Kevin Wayne; Addison Wesley
+* Relayd and Httpd Mastery; Michael W Lucas
## Self-development and soft-skills books
In random order:
-* The Good Enough Job; Simone Stolzoff; Ebury Edge
-* Buddah and Einstein walk into a Bar; Guy Joseph Ale, Claire Bloom; Blackstone Publishing
-* Time Management for System Administrators; Thomas A. Limoncelli; O'Reilly
-* Stop starting, start finishing; Arne Roock; Lean-Kanban University
-* Getting Things Done; David Allen
+* The Phoenix Project - A Novel About IT, DevOps, and Helping your Business Win; Gene Kim and Kevin Behr; Trade Select
+* Psycho-Cybernetics; Maxwell Maltz; Perigee Books
+* Slow Productivity; Cal Newport; Penguin Random House
+* The Complete Software Developer's Career Guide; John Sonmez; Unabridged Audiobook
+* The Daily Stoic; Ryan Holiday, Stephen Hanselman; Profile Books
* Soft Skills; John Sonmez; Manning Publications
+* Consciousness: A Very Short Introduction; Susan Blackmore; Oxford University Press
+* 101 Essays That Will Change the Way You Think; Brianna Wiest; Audible
+* So Good They Can't Ignore You; Cal Newport; Business Plus
+* The Power of Now; Eckhart Tolle; Yellow Kite
+* The 7 Habits Of Highly Effective People; Stephen R. Covey; Simon & Schuster UK
* The Bullet Journal Method; Ryder Carroll; Fourth Estate
+* Getting Things Done; David Allen
+* Never Split the Difference; Chris Voss, Tahl Raz; Random House Business
+* Search Inside Yourself - The Unexpected Path to Achieving Success, Happiness (and World Peace); Chade-Meng Tan, Daniel Goleman, Jon Kabat-Zinn; HarperOne
+* Solve for Happy; Mo Gawdat (RE-READ 1ST TIME)
* Digital Minimalism; Cal Newport; Portfolio Penguin
-* 101 Essays that change the way you think; Brianna Wiest; Audible
-* The Off Switch; Mark Cropley; Virgin Books (RE-READ 1ST TIME)
* Ultralearning; Scott Young; Thorsons
-* The Daily Stoic; Ryan Holiday, Stephen Hanselman; Profile Books
-* Atomic Habits; James Clear; Random House Business
-* Consciousness: A Very Short Introduction; Susan Blackmore; Oxford Uiversity Press
-* Eat That Frog!; Brian Tracy; Hodder Paperbacks
-* Influence without Authority; A. Cohen, D. Bradford; Wiley
-* Who Moved My Cheese?; Dr. Spencer Johnson; Vermilion
-* Ultralearning; Anna Laurent; Self-published via Amazon
-* Search Inside Yourself - The Unexpected path to Achieving Success, Happiness (and World Peace); Chade-Meng Tan, Daniel Goleman, Jon Kabat-Zinn; HarperOne
* Staff Engineer: Leadership beyond the management track; Will Larson; Audible
-* Never Split the Difference; Chris Voss, Tahl Raz; Random House Business
-* Slow Productivity; Cal Newport; Penguin Random House
-* Solve for Happy; Mo Gawdat (RE-READ 1ST TIME)
-* So Good They Can't Ignore You; Cal Newport; Business Plus
-* The Phoenix Project - A Novel About IT, DevOps, and Helping your Business Win; Gene Kim and Kevin Behr; Trade Select
-* The 7 Habits Of Highly Effective People; Stephen R. Covey; Simon & Schuster UK
-* The Power of Now; Eckhard Tolle; Yellow Kite
-* Eat That Frog; Brian Tracy
-* The Complete Software Developer's Career Guide; John Sonmez; Unabridged Audiobook
+* Ultralearning; Anna Laurent; Self-published via Amazon
+* Buddha and Einstein Walk Into a Bar; Guy Joseph Ale, Claire Bloom; Blackstone Publishing
+* Deep Work; Cal Newport; Piatkus
+* Time Management for System Administrators; Thomas A. Limoncelli; O'Reilly
* The Obstacle Is The Way; Ryan Holiday; Profile Books Ltd
-* Psycho-Cybernetics; Maxwell Maltz; Perigee Books
* The Joy of Missing Out; Christina Crook; New Society Publishers
-* Deep Work; Cal Newport; Piatkus
+* Influence without Authority; A. Cohen, D. Bradford; Wiley
+* Eat That Frog!; Brian Tracy; Hodder Paperbacks
+* Who Moved My Cheese?; Dr. Spencer Johnson; Vermilion
+* The Good Enough Job; Simone Stolzoff; Ebury Edge
+* Stop starting, start finishing; Arne Roock; Lean-Kanban University
+* The Off Switch; Mark Cropley; Virgin Books (RE-READ 1ST TIME)
+* Atomic Habits; James Clear; Random House Business
=> ../notes/index.gmi Here are notes of mine for some of the books
@@ -139,30 +139,30 @@ In random order:
Some of these were in-person with exams; others were online lectures only. In random order:
-* Ultimate Go Programming; Bill Kennedy; O'Reilly Online
-* Red Hat Certified System Administrator; Course + certification (Although I had the option, I decided not to take the next course as it is more effective to self learn what I need)
-* Apache Tomcat Best Practises; 3-day on-site training
+* Protocol buffers; O'Reilly Online
* MySQL Deep Dive Workshop; 2-day on-site training
-* The Well-Grounded Rubyist Video Edition; David. A. Black; O'Reilly Online
-* Functional programming lecture; Remote University of Hagen
-* Algorithms Video Lectures; Robert Sedgewick; O'Reilly Online
-* F5 Loadbalancers Training; 2-day on-site training; F5, Inc.
-* Scripting Vim; Damian Conway; O'Reilly Online
+* Linux Security and Isolation APIs Training; Michael Kerrisk; 3-day on-site training
* AWS Immersion Day; Amazon; 1-day interactive online training
-* Protocol buffers; O'Reilly Online
-* Structure and Interpretation of Computer Programs; Harold Abelson and more...;
+* F5 Loadbalancers Training; 2-day on-site training; F5, Inc.
* Developing IaC with Terraform (with Live Lessons); O'Reilly Online
-* Linux Security and Isolation APIs Training; Michael Kerrisk; 3-day on-site training
-* The Ultimate Kubernetes Bootcamp; School of Devops; O'Reilly Online
* Cloud Operations on AWS - Learn how to configure, deploy, maintain, and troubleshoot your AWS environments; 3-day online live training with labs; Amazon
+* The Well-Grounded Rubyist Video Edition; David A. Black; O'Reilly Online
+* The Ultimate Kubernetes Bootcamp; School of Devops; O'Reilly Online
+* Structure and Interpretation of Computer Programs; Harold Abelson and more...;
+* Algorithms Video Lectures; Robert Sedgewick; O'Reilly Online
+* Functional programming lecture; Remote University of Hagen
+* Ultimate Go Programming; Bill Kennedy; O'Reilly Online
+* Red Hat Certified System Administrator; Course + certification (although I had the option, I decided not to take the next course, as it is more effective to teach myself what I need)
+* Apache Tomcat Best Practices; 3-day on-site training
+* Scripting Vim; Damian Conway; O'Reilly Online
## Technical guides
These are not whole books but guides (smaller or larger) that I found very useful. In random order:
-* How CPUs work at https://cpu.land
-* Advanced Bash-Scripting Guide
* Raku Guide at https://raku.guide
+* Advanced Bash-Scripting Guide
+* How CPUs work at https://cpu.land
## Podcasts
@@ -170,55 +170,55 @@ These are not whole books, but guides (smaller or larger) which I found very use
In random order:
-* Maintainable
-* Deep Questions with Cal Newport
-* The ProdCast (Google SRE Podcast)
+* BSD Now
* Dev Interrupted
-* The Changelog Podcast(s)
-* Fallthrough [Golang]
* Fork Around And Find Out
-* The Pragmatic Engineer Podcast
* Backend Banter
+* The ProdCast (Google SRE Podcast)
+* The Changelog Podcast(s)
+* Deep Questions with Cal Newport
* Cup o' Go [Golang]
-* BSD Now
+* Maintainable
+* The Pragmatic Engineer Podcast
+* Fallthrough [Golang]
* Hidden Brain
### Podcasts I liked
I liked these but no longer listen to them. The podcasts have either "finished" (no more episodes) or I stopped listening due to time constraints or a shift in my interests.
+* Java Pub House
* FLOSS weekly
-* CRE: Chaosradio Express [german]
* Modern Mentor
-* Go Time (predecessor of fallthrough)
-* Java Pub House
* Ship It (predecessor of Fork Around And Find Out)
+* CRE: Chaosradio Express [german]
+* Go Time (predecessor of Fallthrough)
## Newsletters I like
This is a mix of tech and non-tech newsletters I am subscribed to. In random order:
-* Ruby Weekly
-* Golang Weekly
-* The Valuable Dev
-* Andreas Brandhorst Newsletter (Sci-Fi author)
-* The Pragmatic Engineer
* Changelog News
-* Monospace Mentor
* VK Newsletter
-* Applied Go Weekly Newsletter
+* Golang Weekly
+* Andreas Brandhorst Newsletter (Sci-Fi author)
* Register Spill
+* Applied Go Weekly Newsletter
* The Imperfectionist
+* Monospace Mentor
+* Ruby Weekly
+* The Pragmatic Engineer
* byteSizeGo
+* The Valuable Dev
## Magazines I like(d)
This is a mix of tech magazines I like(d). I may not be a current subscriber, but now and then I buy an issue. In random order:
-* Linux User
-* LWN (online only)
* Linux Magazine
+* Linux User
* freeX (not published anymore)
+* LWN (online only)
# Formal education
diff --git a/gemfeed/atom.xml b/gemfeed/atom.xml
index 0a678298..c0aeafce 100644
--- a/gemfeed/atom.xml
+++ b/gemfeed/atom.xml
@@ -1,6 +1,6 @@
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
- <updated>2025-04-04T23:21:02+03:00</updated>
+ <updated>2025-04-04T23:23:08+03:00</updated>
<title>foo.zone feed</title>
<subtitle>To be in the .zone!</subtitle>
<link href="gemini://foo.zone/gemfeed/atom.xml" rel="self" />
@@ -20,6 +20,8 @@
<div xmlns="http://www.w3.org/1999/xhtml">
<h1 style='display: inline' id='f3s-kubernetes-with-freebsd---part-4-rocky-linux-bhyve-vms'>f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</h1><br />
<br />
+<span class='quote'>Published at 2025-04-04T23:21:01+03:00</span><br />
+<br />
<span>This is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.</span><br />
<br />
<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br />
diff --git a/gemfeed/helix-gpt.log b/gemfeed/helix-gpt.log
index 8808371e..07f77c15 100644
--- a/gemfeed/helix-gpt.log
+++ b/gemfeed/helix-gpt.log
@@ -12328,3 +12328,187 @@ APP 2025-04-04T20:20:33.592Z --> sent request | {"jsonrpc":"2.0","method":"textD
APP 2025-04-04T20:20:43.108Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+APP 2025-04-04T20:22:12.532Z --> received didChange | language: markdown | contentVersion: 2350 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:12.613Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":6,"line":250},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":490}
+
+APP 2025-04-04T20:22:12.742Z --> received didChange | language: markdown | contentVersion: 2351 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:12.814Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, w edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 14GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will be installing highly available storage with HAST, or we will start setting up k3s on the Rocky Linux VMs.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2351}
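A side note on the silly benchmark captured in the buffer above: `_ = i * i` in `BenchmarkCPUSilly1` is easy for the Go compiler to optimize away entirely, in which case the loop measures next to nothing. A common counter-measure, sketched here (this is my own variation, not the sillybench repository's code; the `sink` variable and `runSillyBench` helper are additions for illustration), is to assign the result to a package-level sink variable:

```go
package main

import (
	"fmt"
	"testing"
)

// sink is package-level, so the compiler cannot prove the benchmarked
// multiplication is dead code and eliminate it.
var sink int

// runSillyBench measures the multiply loop via testing.Benchmark,
// which runs a benchmark function outside of "go test".
func runSillyBench() testing.BenchmarkResult {
	return testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			sink = i * i
		}
	})
}

func main() {
	res := runSillyBench()
	fmt.Printf("%d iterations, %.4f ns/op\n",
		res.N, float64(res.T.Nanoseconds())/float64(res.N))
}
```

Whether the measured ns/op differs much from the original depends on what the compiler actually did with `_ = i * i`; the point of the sink is only to make the number trustworthy.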
+
+APP 2025-04-04T20:22:12.814Z --> skipping because content is stale
+
+APP 2025-04-04T20:22:12.814Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+
+APP 2025-04-04T20:22:12.814Z --> sent request | {"jsonrpc":"2.0","id":490,"result":{"isIncomplete":false,"items":[]}}
+
+APP 2025-04-04T20:22:12.911Z --> received didChange | language: markdown | contentVersion: 2352 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:13.163Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":8,"line":250},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":491}
+
+APP 2025-04-04T20:22:13.364Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run, or might perform sub-optimally, in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs adjustments to make Rocky Linux boot (and some other tweaks; e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated way, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC on each VM and entering the following commands:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, we edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 14GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am aware that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will be installing highly available storage with HAST, or we will start setting up k3s on the Rocky Linux VMs.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2352}
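To put the benchmark numbers captured above into perspective: taking the ns/op figures at face value, the virtualization overhead works out to single-digit percentages. A quick sketch of that arithmetic (the `overheadPct` helper is my own; the hard-coded figures are the `BenchmarkCPUSilly1` results quoted in the buffer above):

```go
package main

import "fmt"

// overheadPct returns how much slower vmNsPerOp is relative to
// hostNsPerOp, expressed in percent.
func overheadPct(hostNsPerOp, vmNsPerOp float64) float64 {
	return (vmNsPerOp - hostNsPerOp) / hostNsPerOp * 100
}

func main() {
	host := 0.4022      // FreeBSD host
	linuxVM := 0.4347   // Rocky Linux Bhyve VM
	freebsdVM := 0.4273 // FreeBSD Bhyve VM
	fmt.Printf("Linux VM overhead:   %.1f%%\n", overheadPct(host, linuxVM))   // ~8.1%
	fmt.Printf("FreeBSD VM overhead: %.1f%%\n", overheadPct(host, freebsdVM)) // ~6.2%
}
```

This matches the qualitative conclusion in the post: the FreeBSD guest sits a little closer to bare metal than the Linux guest, and both stay within a few percent of the host.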
+
+APP 2025-04-04T20:22:13.365Z --> calling completion event
+
+APP 2025-04-04T20:22:13.365Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":250,"character":0},"end":{"line":251,"character":0}}}]
+
+APP 2025-04-04T20:22:13.365Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":250,"character":0},"end":{"line":251,"character":0}},"source":"helix-gpt"}]}}
+
+APP 2025-04-04T20:22:13.365Z --> copilot | completion request
+
+APP 2025-04-04T20:22:13.366Z --> fetch | /v1/engines/copilot-codex/completions
+
+APP 2025-04-04T20:22:13.843Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200
+
+APP 2025-04-04T20:22:13.843Z --> completion hints: | can SSH into the VMs without a password.
+
+APP 2025-04-04T20:22:13.844Z --> sent request | {"jsonrpc":"2.0","id":491,"result":{"isIncomplete":false,"items":[{"label":"can SSH into the VMs without a password.","kind":1,"preselect":true,"detail":"can SSH into the VMs without a password.","insertText":"can SSH into the VMs without a password.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":250,"character":48},"end":{"line":250,"character":199}}}]}]}}
+
+APP 2025-04-04T20:22:13.844Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+
+APP 2025-04-04T20:22:15.050Z --> received didChange | language: markdown | contentVersion: 2353 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:15.225Z --> received didChange | language: markdown | contentVersion: 2354 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:23.367Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+
+APP 2025-04-04T20:22:45.243Z --> received didChange | language: markdown | contentVersion: 2355 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:45.334Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":90,"line":456},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":492}
+
+APP 2025-04-04T20:22:45.475Z --> received didChange | language: markdown | contentVersion: 2356 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:45.535Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run, or might perform sub-optimally, in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs adjustments to make Rocky Linux boot (and some other tweaks; e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated way, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (and in the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, we edit the `/etc/ssh/sshd_config` file again on all 3 VMs and 
set `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 14GB of RAM; the benchmark won't use all of them anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, we skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will install highly available storage with HAST, or we will start setting up k3s on the Rocky Linux VMs.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2356}
+
+APP 2025-04-04T20:22:45.535Z --> skipping because content is stale
+
+APP 2025-04-04T20:22:45.536Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+
+APP 2025-04-04T20:22:45.536Z --> sent request | {"jsonrpc":"2.0","id":492,"result":{"isIncomplete":false,"items":[]}}
+
+APP 2025-04-04T20:22:45.636Z --> received didChange | language: markdown | contentVersion: 2357 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:45.887Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":92,"line":456},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":493}
+
+APP 2025-04-04T20:22:46.089Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this configuration needs some adjustments to make Rocky Linux boot (and, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM). 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (and in the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, we edit the `/etc/ssh/sshd_config` file again on all 3 VMs and 
set `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 14GB of RAM; the benchmark won't use all of them anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, we skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will install highly available storage with HAST, or we will start setting up k3s on the Rocky Linux VMs.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2357}
+
+APP 2025-04-04T20:22:46.090Z --> calling completion event
+
+APP 2025-04-04T20:22:46.090Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":456,"character":0},"end":{"line":457,"character":0}}}]
+
+APP 2025-04-04T20:22:46.090Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":456,"character":0},"end":{"line":457,"character":0}},"source":"helix-gpt"}]}}
+
+APP 2025-04-04T20:22:46.090Z --> copilot | completion request
+
+APP 2025-04-04T20:22:46.091Z --> fetch | /v1/engines/copilot-codex/completions
+
+APP 2025-04-04T20:22:46.515Z --> received didChange | language: markdown | contentVersion: 2358 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:46.609Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200
+
+APP 2025-04-04T20:22:46.610Z --> completion hints: | need to build it from source. The following commands are executed on the Rocky Linux VMs:
+
+APP 2025-04-04T20:22:46.610Z --> sent request | {"jsonrpc":"2.0","id":493,"result":{"isIncomplete":false,"items":[{"label":"need to build it from source. The following commands are executed on the Rocky Linux VMs:","kind":1,"preselect":true,"detail":"need to build it from source. The following commands are executed on the Rocky Linux VMs:","insertText":"need to build it from source. The following commands are executed on the Rocky Linux VMs:","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":456,"character":181},"end":{"line":456,"character":200}}}]}]}}
+
+APP 2025-04-04T20:22:46.610Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+
+APP 2025-04-04T20:22:46.627Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":93,"line":456},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":494}
+
+APP 2025-04-04T20:22:46.829Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, the configuration needs a few adjustments (e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB RAM). 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWe then configure the IPs on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, we edited the `/etc/ssh/sshd_config` file again on all 3 VMs and 
configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 14GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am aware that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, we skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will be installing highly available storage with HAST or we start setting up k3s on the Rocky Linux VMs.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2358}
+
+APP 2025-04-04T20:22:46.829Z --> calling completion event
+
+APP 2025-04-04T20:22:46.830Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":456,"character":0},"end":{"line":457,"character":0}}}]
+
+APP 2025-04-04T20:22:46.830Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":456,"character":0},"end":{"line":457,"character":0}},"source":"helix-gpt"}]}}
+
+APP 2025-04-04T20:22:46.830Z --> copilot | completion request
+
+APP 2025-04-04T20:22:46.830Z --> fetch | /v1/engines/copilot-codex/completions
+
+APP 2025-04-04T20:22:47.067Z --> received didChange | language: markdown | contentVersion: 2359 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:47.116Z --> received didChange | language: markdown | contentVersion: 2360 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:47.223Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200
+
+APP 2025-04-04T20:22:47.224Z --> completion hints: | have to use the `sillybench` benchmark again. I installed it in the same way as before and ran the same benchmarks as above.
+
+APP 2025-04-04T20:22:47.225Z --> sent request | {"jsonrpc":"2.0","id":494,"result":{"isIncomplete":false,"items":[{"label":"have to use the `sillybench` benchmark again. I installed it in the same way as before and ran the same benchmarks as above.","kind":1,"preselect":true,"detail":"have to use the `sillybench` benchmark again. I installed it in the same way as before and ran the same benchmarks as above.","insertText":"have to use the `sillybench` benchmark again. I installed it in the same way as before and ran the same benchmarks as above.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":456,"character":217},"end":{"line":456,"character":227}}}]}]}}
+
+APP 2025-04-04T20:22:47.225Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+
+APP 2025-04-04T20:22:47.284Z --> received didChange | language: markdown | contentVersion: 2361 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:47.670Z --> received didChange | language: markdown | contentVersion: 2362 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:47.788Z --> received didChange | language: markdown | contentVersion: 2363 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:47.794Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":98,"line":456},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":495}
+
+APP 2025-04-04T20:22:47.995Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, the configuration needs a few adjustments (e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB RAM). 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC on the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, we edit the `/etc/ssh/sshd_config` file again on all 3 VMs and 
set `PasswordAuthentication no` to allow only SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 14GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, we skip this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will install highly available storage with HAST, or start setting up k3s on the Rocky Linux VMs.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2363}
+
+APP 2025-04-04T20:22:47.995Z --> calling completion event
+
+APP 2025-04-04T20:22:47.995Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":456,"character":0},"end":{"line":457,"character":0}}}]
+
+APP 2025-04-04T20:22:47.995Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":456,"character":0},"end":{"line":457,"character":0}},"source":"helix-gpt"}]}}
+
+APP 2025-04-04T20:22:47.995Z --> copilot | completion request
+
+APP 2025-04-04T20:22:47.996Z --> fetch | /v1/engines/copilot-codex/completions
+
+APP 2025-04-04T20:22:48.452Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200
+
+APP 2025-04-04T20:22:48.453Z --> completion hints: | this benchmark for now. If you know how to install it on Rocky Linux, please let me know.
+
+APP 2025-04-04T20:22:48.453Z --> sent request | {"jsonrpc":"2.0","id":495,"result":{"isIncomplete":false,"items":[{"label":"this benchmark for now. If you know how to install it on Rocky Linux, please let me know.","kind":1,"preselect":true,"detail":"this benchmark for now. If you know how to install it on Rocky Linux, please let me know.","insertText":"this benchmark for now. If you know how to install it on Rocky Linux, please let me know.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":456,"character":187},"end":{"line":456,"character":197}}}]}]}}
+
+APP 2025-04-04T20:22:48.453Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+
+APP 2025-04-04T20:22:54.225Z --> received didChange | language: markdown | contentVersion: 2364 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:54.334Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":184,"line":460},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":496}
+
+APP 2025-04-04T20:22:54.464Z --> received didChange | language: markdown | contentVersion: 2365 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:54.536Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, we need to adjust this configuration (plus some other tweaks: as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB of RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (or on the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC on the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, we edit the `/etc/ssh/sshd_config` file again on all 3 VMs and 
set `PasswordAuthentication no` to allow only SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 14GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, we skip this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will install highly available storage with HAST, or start setting up k3s on the Rocky Linux VMs.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2365}
+
+APP 2025-04-04T20:22:54.536Z --> skipping because content is stale
+
+APP 2025-04-04T20:22:54.537Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+
+APP 2025-04-04T20:22:54.537Z --> sent request | {"jsonrpc":"2.0","id":496,"result":{"isIncomplete":false,"items":[]}}
+
+APP 2025-04-04T20:22:54.633Z --> received didChange | language: markdown | contentVersion: 2366 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:22:54.883Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":186,"line":460},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":497}
+
+APP 2025-04-04T20:22:55.084Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we need to adjust the configuration to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB RAM. 
So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now also open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, we edit the `/etc/ssh/sshd_config` file again on all 3 VMs and 
configure `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 14GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, we skip this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, we can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will install highly available storage with HAST, or we will start setting up k3s on the Rocky Linux VMs.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2366}
+
+APP 2025-04-04T20:22:55.084Z --> calling completion event
+
+APP 2025-04-04T20:22:55.084Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":460,"character":0},"end":{"line":461,"character":0}}}]
+
+APP 2025-04-04T20:22:55.085Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":460,"character":0},"end":{"line":461,"character":0}},"source":"helix-gpt"}]}}
+
+APP 2025-04-04T20:22:55.085Z --> copilot | completion request
+
+APP 2025-04-04T20:22:55.085Z --> fetch | /v1/engines/copilot-codex/completions
+
+APP 2025-04-04T20:22:55.722Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200
+
+APP 2025-04-04T20:22:55.724Z --> completion hints: | can run various workloads, including k3s, which is a lightweight Kubernetes distribution. This setup allows us to keep our FreeBSD hosts clean and focused on their primary tasks while still benefiting from the flexibility of virtualization.
+
+APP 2025-04-04T20:22:55.724Z --> sent request | {"jsonrpc":"2.0","id":497,"result":{"isIncomplete":false,"items":[{"label":"can run various workloads, including k3s, which is a lightweight Kubernetes distribution. This setup allows us to keep our FreeBSD hosts clean and focused on their primary tasks while still benefiting from the flexibility of virtualization.","kind":1,"preselect":true,"detail":"can run various workloads, including k3s, which is a lightweight Kubernetes distribution. This setup allows us to keep our FreeBSD hosts clean and focused on their primary tasks while still benefiting from the flexibility of virtualization.","insertText":"can run various workloads, including k3s, which is a lightweight Kubernetes distribution. This setup allows us to keep our FreeBSD hosts clean and focused on their primary tasks while still benefiting from the flexibility of virtualization.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":460,"character":426},"end":{"line":460,"character":545}}}]}]}}
+
+APP 2025-04-04T20:22:55.724Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+
+APP 2025-04-04T20:22:56.090Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+
+APP 2025-04-04T20:22:56.832Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+
+APP 2025-04-04T20:22:57.996Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+
+APP 2025-04-04T20:23:05.086Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+
diff --git a/index.gmi b/index.gmi
index 5ee9dd50..6390b7ef 100644
--- a/index.gmi
+++ b/index.gmi
@@ -1,6 +1,6 @@
# Hello!
-> This site was generated at 2025-04-04T23:21:02+03:00 by `Gemtexter`
+> This site was generated at 2025-04-04T23:23:08+03:00 by `Gemtexter`
Welcome to the ...
diff --git a/uptime-stats.gmi b/uptime-stats.gmi
index 5adfb18c..b97f9ced 100644
--- a/uptime-stats.gmi
+++ b/uptime-stats.gmi
@@ -1,6 +1,6 @@
# My machine uptime stats
-> This site was last updated at 2025-04-04T23:21:02+03:00
+> This site was last updated at 2025-04-04T23:23:08+03:00
The following stats were collected via `uptimed` on all of my personal computers over many years and the output was generated by `guprecords`, the global uptime records stats analyser of mine.