foo.zone feed "To be in the .zone!": gemini://foo.zone/
"A Monk's Guide to Happiness" book notes: gemini://foo.zone/gemfeed/2025-06-07-a-monks-guide-to-happiness-book-notes.gmi
Published at 2025-06-07T10:30:11+03:00 by Paul Buetow aka snonux (paul@dev.buetow.org)

"A Monk's Guide to Happiness" book notes



These are my personal book notes from Gelong Thubten's "A Monk's Guide to Happiness: Meditation in the 21st century." They are for my own reference, but I hope they might be useful to you as well.

Understanding Happiness



  • Happiness is a skill we can train.
  • Happiness is not about accomplishing goals, as those lie in the future.
  • Feel free now, with no urge towards past or future.
  • We can learn to produce our own happiness independently of physical needs. When we walk in a park, how do we feel? We can train ourselves to reproduce that feeling independently.

The Role of Meditation



  • Meditation is not about clearing your mind. A busy mind does not interfere with your meditation.
  • Our problem is that we fail to notice our awareness. Meditation connects us with awareness. Awareness is freedom.
  • We can let the mind be and not engage with the thoughts. This benefits your life and protects you from all kinds of stress.
  • It is better to meditate with open eyes so you don't associate meditation with darkness. You will also be able to enter a meditative state of mind outside of the meditation session.
  • Set a baseline meditation time to build up discipline.
  • We don't need to do anything about stress; we just take a step back.

Managing Thoughts and Emotions



  • Our flow of emotions is really just habit, and habits can be changed through training, e.g. meditation training.
  • A part of the mind recognises that we are sad or angry. That part is not itself sad or angry. So we can retreat to that part of the mind, be the observer, and not be drawn into the constant flow of emotions and thoughts.
  • Leave the front and back doors of your house open, and let the thoughts come in and leave. Just don't serve them tea. (A great Zen master once said this.)
  • Thoughts are friends, not enemies.
  • Thoughts help the meditation, as they make us notice that we have wandered off; each time we notice, we strengthen that reflex.

Practice and Discipline



  • Habits are important for practising mindfulness. Bring mindfulness into your daily routine.
  • Integrating short moments of mindfulness during the day is the fast track to happiness. Start off with small tasks, e.g. while washing your hands.
  • Have many small doses of mindfulness and don't prolong them, as otherwise your mind will revolt.
  • Have a small moment of mindfulness when you wake up and before you go to sleep.
  • Practise staying fully present in an uncomfortable situation, without judgement.
  • Don't become two persons who never meet: the meditator and the non-meditator. So integrate mindfulness during the day too.

Perspectives on Relationships and Interactions



  • Who is the opponent: the other person, the things they said, or our reactions to those things? Forgiveness is a high form of compassion.
  • Understand the suffering of the person who "hurt" us. Where is the aggressor really coming from?
  • People who are stressed or unhappy say and do things they wouldn't say or do otherwise. Acting in anger is like acting under the influence of alcohol.
  • People don't have a master plan to destroy others, even if it seems so. They are under a strong negative influence themselves; something terrible happened to them. Revenge makes no sense.
  • Be grateful for people "trying" to hurt you, as they help you practice your path.

Reflective Questions



  • Why do I do all the things I do? What do I try to achieve?
  • What am I doing about that?
  • Is it working?
  • What are the real causes of happiness and suffering?
  • What about meditation? How does that address the situation?

Miscellaneous Guidelines



  • Posture is important as the mind and body are connected.
  • Don't use music, so you don't come to rely on music to change your state of mind. The same goes for regular guided meditation: guided meditation is good for learning a technique, but you should not rely on another voice.
  • You are not trying to relax. Relaxing and trying are two different things.
  • When you love everything, even the bad things happening to you, then you are invincible.
  • Happiness is all in your mind. As if you flip a switch there.
  • Digging for answers will never end; it will always turn up more material to dig through.

If happiness is a matter of the mind, then clearly free time is best spent training the mind, e.g. meditating or reflecting on the benefits of meditation, rather than always being busy with other things. Everything we do in our free time is a search for happiness. Are the things we do actually working? There is always something around the corner...

E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site
f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network: gemini://foo.zone/gemfeed/2025-05-11-f3s-kubernetes-with-freebsd-part-5.gmi
By Paul Buetow aka snonux (paul@dev.buetow.org)

f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network



Published at 2025-05-11T11:35:57+03:00

This is the fifth blog post about my f3s series for my self-hosting demands in my home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution I will use on FreeBSD-based physical machines.

I will post a new entry every month or so (there are too many other side projects for more frequent updates — I bet you can understand).

These are all the posts so far:

2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage
2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation
2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts
2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs
2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network (You are currently reading this)

f3s logo

ChatGPT generated logo.

Let's begin...

Introduction



By default, traffic within my home LAN, including traffic inside a k3s cluster, is not encrypted. While it resides in the "secure" home LAN, adopting a zero-trust policy means encryption is still preferable to ensure confidentiality and security. So we secure the traffic of all f3s hosts by building a full mesh network between the participating hosts:

Full mesh network

Here, f0, f1, and f2 are the FreeBSD base hosts; r0, r1, and r2 are the Rocky Linux Bhyve VMs; and blowfish and fishfinger are two OpenBSD systems running on the internet (as mentioned in the first blog of this series, these systems are already built; in fact, this very blog is served by them).

As we can see from the graph, it is a true full-mesh network, where every host has a VPN tunnel to every other host. The benefit is that we do not need to route traffic through intermediate hosts (significantly simplifying the routing configuration). However, the downside is that there is some overhead in configuring and managing all the tunnels.

For simplicity, we also establish VPN tunnels between f0 <-> r0, f1 <-> r1, and f2 <-> r2. Technically, this wouldn't be strictly required since the VMs rN are running on the hosts fN, and no network traffic is leaving the box. However, it simplifies the configuration as we don't have to account for exceptions, and we are going to automate the mesh network configuration anyway (read on).

Expected traffic flow



The traffic is expected to flow between the host groups through the mesh network as follows:

  • fN <-> rN: The traffic between the FreeBSD hosts and the Rocky Linux VMs will be routed through the VPN tunnels for persistent storage. In a later post in this series, we will set up an NFS server on the fN hosts.
  • fN <-> blowfish,fishfinger: The traffic between the FreeBSD hosts and the OpenBSD hosts blowfish and fishfinger will be routed through the VPN tunnels for management. We may want to log in via the internet to set things up remotely. The VPN tunnel will also be used for monitoring purposes.
  • rN <-> blowfish,fishfinger: The traffic between the Rocky Linux VMs and the OpenBSD hosts blowfish and fishfinger will be routed through the VPN tunnels for usage traffic. Since k3s will be running on the rN hosts, the OpenBSD servers will route traffic through relayd to the services running in Kubernetes.
  • fN <-> fM: The traffic between the FreeBSD hosts may be later used for data replication for the NFS storage.
  • rN <-> rM: The traffic between the Rocky Linux VMs will later be used by the k3s cluster itself, as every rN will be a Kubernetes worker node.
  • blowfish <-> fishfinger: The traffic between the OpenBSD hosts isn't strictly required for this setup, but I set it up anyway for future use cases.

We won't cover all the details here, as this post focuses only on setting up the mesh network. Subsequent posts in this series will cover the rest.

Deciding on WireGuard



I have decided to use WireGuard as the VPN technology for this purpose.

WireGuard is a lightweight, modern, and secure VPN protocol designed for simplicity, speed, and strong cryptography. It is an excellent choice due to its minimal codebase, ease of configuration, high performance, and robust security, utilizing state-of-the-art encryption standards. WireGuard is supported on various operating systems, and its implementations are compatible with each other. Therefore, establishing WireGuard VPN tunnels between FreeBSD, Linux, and OpenBSD is seamless. This cross-platform availability makes it suitable for setups like the one described in this blog series.

We could have used Tailscale to easily set up and manage the WireGuard network, but the benefits of creating our own mesh network are:

  • Learning about WireGuard configuration details
  • Having full control over the setup
  • Not relying on an external provider like Tailscale (even if some of its components are open source)
  • Having even more fun along the way
  • WireGuard is easy to configure on my target operating systems and, therefore, easier to maintain in the long run.
  • There are no official Tailscale packages for OpenBSD and FreeBSD. Getting Tailscale running on these systems is still possible, though some tinkering would be required. Instead, we use that tinkering time to set up WireGuard tunnels ourselves.

https://en.wikipedia.org/wiki/WireGuard
https://www.wireguard.com/
https://tailscale.com/

WireGuard Logo

Base configuration



In the following, we prepare the base configuration for the WireGuard mesh network. We will use a similar configuration on all participating hosts, with the exception of the host IP addresses and the private keys.

FreeBSD



On the FreeBSD hosts f0, f1 and f2, similar to last time, we first bring the system up to date:

paul@f0:~ % doas freebsd-update fetch
paul@f0:~ % doas freebsd-update install
paul@f0:~ % doas shutdown -r now
..
..
paul@f0:~ % doas pkg update
paul@f0:~ % doas pkg upgrade
paul@f0:~ % reboot

Next, we install wireguard-tools and configure the WireGuard service:

paul@f0:~ % doas pkg install wireguard-tools
paul@f0:~ % doas sysrc wireguard_interfaces=wg0
wireguard_interfaces:  -> wg0
paul@f0:~ % doas sysrc wireguard_enable=YES
wireguard_enable:  -> YES
paul@f0:~ % doas mkdir -p /usr/local/etc/wireguard
paul@f0:~ % doas touch /usr/local/etc/wireguard/wg0.conf
paul@f0:~ % doas service wireguard start
paul@f0:~ % doas wg show
interface: wg0
  public key: L+V9o0fNYkMVKNqsX7spBzD/9oSvxM/C7ZCZX1jLO3Q=
  private key: (hidden)
  listening port: 20246

We now have WireGuard up and running, but not yet in any functional configuration. We will come back to that later.

Next, we add all the participating WireGuard IPs to the hosts file. This is only for convenience, so we don't have to manage an external DNS server for this:

paul@f0:~ % cat <<END | doas tee -a /etc/hosts

192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org

192.168.2.130 f0.wg0 f0.wg0.wan.buetow.org
192.168.2.131 f1.wg0 f1.wg0.wan.buetow.org
192.168.2.132 f2.wg0 f2.wg0.wan.buetow.org

192.168.2.120 r0.wg0 r0.wg0.wan.buetow.org
192.168.2.121 r1.wg0 r1.wg0.wan.buetow.org
192.168.2.122 r2.wg0 r2.wg0.wan.buetow.org

192.168.2.110 blowfish.wg0 blowfish.wg0.wan.buetow.org
192.168.2.111 fishfinger.wg0 fishfinger.wg0.wan.buetow.org
END

As you can see, 192.168.1.0/24 is the network used in my LAN (with the fN and rN hosts) and 192.168.2.0/24 is the network used for the WireGuard mesh network. The wg0 interface will be used for all WireGuard traffic.
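As a side note, the addressing scheme keeps the last octet identical between the LAN and the WireGuard network, which makes the mapping easy to derive. A minimal Ruby sketch of that convention (purely illustrative, not part of the actual setup):

```ruby
# Map a LAN address to its WireGuard counterpart by swapping the /24 prefix.
# The last octet stays the same (e.g. f0: 192.168.1.130 -> 192.168.2.130).
def wg_ip(lan_ip, wg_prefix: '192.168.2')
  last_octet = lan_ip.split('.').last
  "#{wg_prefix}.#{last_octet}"
end

puts wg_ip('192.168.1.130') # f0 -> 192.168.2.130
puts wg_ip('192.168.1.120') # r0 -> 192.168.2.120
```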

Rocky Linux



We bring the Rocky Linux VMs up to date as well with the following:

[root@r0 ~] dnf update -y
[root@r0 ~] reboot

Next, we prepare WireGuard on them. Same as on the FreeBSD hosts, we will only prepare WireGuard without any useful configuration yet:

[root@r0 ~] dnf install -y wireguard-tools
[root@r0 ~] mkdir -p /etc/wireguard
[root@r0 ~] touch /etc/wireguard/wg0.conf
[root@r0 ~] systemctl enable wg-quick@wg0.service
[root@r0 ~] systemctl start wg-quick@wg0.service
[root@r0 ~] systemctl disable firewalld

We also update the hosts file accordingly:

[root@r0 ~] cat <<END >>/etc/hosts

192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org

192.168.2.130 f0.wg0 f0.wg0.wan.buetow.org
192.168.2.131 f1.wg0 f1.wg0.wan.buetow.org
192.168.2.132 f2.wg0 f2.wg0.wan.buetow.org

192.168.2.120 r0.wg0 r0.wg0.wan.buetow.org
192.168.2.121 r1.wg0 r1.wg0.wan.buetow.org
192.168.2.122 r2.wg0 r2.wg0.wan.buetow.org

192.168.2.110 blowfish.wg0 blowfish.wg0.wan.buetow.org
192.168.2.111 fishfinger.wg0 fishfinger.wg0.wan.buetow.org
END

Unfortunately, the SELinux policy on Rocky Linux blocks WireGuard's operation. By making the wireguard_t domain permissive using semanage permissive -a wireguard_t, SELinux will no longer enforce restrictions for WireGuard, allowing it to work as intended:

[root@r0 ~] dnf install -y policycoreutils-python-utils
[root@r0 ~] semanage permissive -a wireguard_t
[root@r0 ~] reboot

https://github.com/angristan/wireguard-install/discussions/499

OpenBSD



Unlike the FreeBSD and Rocky Linux hosts, my OpenBSD hosts (blowfish and fishfinger, which run at OpenBSD Amsterdam and Hetzner on the internet) have already been in operation for a while, so I can't provide the "from scratch" installation details here. In the following, we only focus on the additional configuration needed to set up WireGuard:

blowfish$ doas pkg_add wireguard-tools
blowfish$ doas mkdir /etc/wireguard
blowfish$ doas touch /etc/wireguard/wg0.conf
blowfish$ cat <<END | doas tee /etc/hostname.wg0
inet 192.168.2.110 255.255.255.0 NONE
up
!/usr/local/bin/wg setconf wg0 /etc/wireguard/wg0.conf
END

Note that on blowfish, we configure 192.168.2.110 here in hostname.wg0, and on fishfinger, we configure 192.168.2.111. Those are the IP addresses of the WireGuard interfaces on those hosts.

And here, we also update the hosts file accordingly:

blowfish$ cat <<END | doas tee -a /etc/hosts

192.168.2.130 f0.wg0 f0.wg0.wan.buetow.org
192.168.2.131 f1.wg0 f1.wg0.wan.buetow.org
192.168.2.132 f2.wg0 f2.wg0.wan.buetow.org

192.168.2.120 r0.wg0 r0.wg0.wan.buetow.org
192.168.2.121 r1.wg0 r1.wg0.wan.buetow.org
192.168.2.122 r2.wg0 r2.wg0.wan.buetow.org

192.168.2.110 blowfish.wg0 blowfish.wg0.wan.buetow.org
192.168.2.111 fishfinger.wg0 fishfinger.wg0.wan.buetow.org
END

WireGuard configuration



So far, we have only started WireGuard on all participating hosts without any useful configuration. This means that no VPN tunnel has been established yet between any of the hosts.

Example wg0.conf



Generally speaking, a wg0.conf looks like this (example from f0 host):

[Interface]
# f0.wg0.wan.buetow.org
Address = 192.168.2.130
PrivateKey = **************************
ListenPort = 56709

[Peer]
# f1.lan.buetow.org as f1.wg0.wan.buetow.org
PublicKey = **************************
PresharedKey = **************************
AllowedIPs = 192.168.2.131/32
Endpoint = 192.168.1.131:56709
# No KeepAlive configured

[Peer]
# f2.lan.buetow.org as f2.wg0.wan.buetow.org
PublicKey = **************************
PresharedKey = **************************
AllowedIPs = 192.168.2.132/32
Endpoint = 192.168.1.132:56709
# No KeepAlive configured

[Peer]
# r0.lan.buetow.org as r0.wg0.wan.buetow.org
PublicKey = **************************
PresharedKey = **************************
AllowedIPs = 192.168.2.120/32
Endpoint = 192.168.1.120:56709
# No KeepAlive configured

[Peer]
# r1.lan.buetow.org as r1.wg0.wan.buetow.org
PublicKey = **************************
PresharedKey = **************************
AllowedIPs = 192.168.2.121/32
Endpoint = 192.168.1.121:56709
# No KeepAlive configured

[Peer]
# r2.lan.buetow.org as r2.wg0.wan.buetow.org
PublicKey = **************************
PresharedKey = **************************
AllowedIPs = 192.168.2.122/32
Endpoint = 192.168.1.122:56709
# No KeepAlive configured

[Peer]
# blowfish.buetow.org as blowfish.wg0.wan.buetow.org
PublicKey = **************************
PresharedKey = **************************
AllowedIPs = 192.168.2.110/32
Endpoint = 23.88.35.144:56709
PersistentKeepalive = 25

[Peer]
# fishfinger.buetow.org as fishfinger.wg0.wan.buetow.org
PublicKey = **************************
PresharedKey = **************************
AllowedIPs = 192.168.2.111/32
Endpoint = 46.23.94.99:56709
PersistentKeepalive = 25

There are two main sections. The first is [Interface], which configures the current host (here: f0):

  • Address: Local virtual IP address on the WireGuard interface.
  • PrivateKey: Private key for this node.
  • ListenPort: Port on which this WireGuard interface listens for incoming connections.

And in the following, there is one [Peer] section for every peer node on the mesh network:

  • PublicKey: The public key of the remote peer, used to authenticate its identity.
  • PresharedKey: An optional symmetric key that enhances security (used in addition to the public key cryptography).
  • AllowedIPs: IPs or subnets routed through this peer (traffic is allowed to/from these IPs).
  • Endpoint: The public IP:port combination of the remote peer for connection.
  • PersistentKeepalive: Keeps the tunnel alive by sending periodic packets; used for NAT traversal.
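To make the format concrete, here is a minimal Ruby sketch that renders a [Peer] section from these fields. This is only an illustration of the file format with placeholder keys, not the author's actual generator code:

```ruby
# Render a single WireGuard [Peer] section from its fields.
# Illustrative sketch only; the key values shown are placeholders.
def peer_section(comment:, public_key:, preshared_key:, allowed_ip:, endpoint:, keepalive: nil)
  lines = ['[Peer]',
           "# #{comment}",
           "PublicKey = #{public_key}",
           "PresharedKey = #{preshared_key}",
           "AllowedIPs = #{allowed_ip}/32",
           "Endpoint = #{endpoint}"]
  # PersistentKeepalive is only emitted for peers behind NAT (see next section).
  lines << "PersistentKeepalive = #{keepalive}" if keepalive
  lines.join("\n")
end

puts peer_section(comment: 'f1.lan.buetow.org as f1.wg0.wan.buetow.org',
                  public_key: 'PUB', preshared_key: 'PSK',
                  allowed_ip: '192.168.2.131', endpoint: '192.168.1.131:56709')
```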

NAT traversal and keepalive



As all participating hosts, except for blowfish and fishfinger (which are on the internet), are behind a NAT gateway (my home router), we need to use PersistentKeepalive to establish and maintain the VPN tunnel from the LAN to the internet because:

By default, WireGuard tries to be as silent as possible when not being used; it is not a chatty protocol. For the most part, it only transmits data when a peer wishes to send packets. When it's not being asked to send packets, it stops sending packets until it is asked again. In the majority of configurations, this works well. However, when a peer is behind NAT or a firewall, it might wish to be able to receive incoming packets even when it is not sending any packets. Because NAT and stateful firewalls keep track of "connections", if a peer behind NAT or a firewall wishes to receive incoming packets, he must keep the NAT/firewall mapping valid, by periodically sending keepalive packets. This is called persistent keepalives. When this option is enabled, a keepalive packet is sent to the server endpoint once every interval seconds. A sensible interval that works with a wide variety of firewalls is 25 seconds. Setting it to 0 turns the feature off, which is the default, since most users will not need this, and it makes WireGuard slightly more chatty. This feature may be specified by adding the PersistentKeepalive = field to a peer in the configuration file, or setting persistent-keepalive at the command line. If you don't need this feature, don't enable it. But if you're behind NAT or a firewall and you want to receive incoming connections long after network traffic has gone silent, this option will keep the "connection" open in the eyes of NAT.

That's why you see PersistentKeepalive = 25 in the blowfish and fishfinger peer configurations. This means that every 25 seconds, a keepalive packet is sent over the tunnel to maintain its connection. If the tunnel is not yet established, it will be created within at most 25 seconds.

Without this, we might never have a VPN tunnel open, as the systems in the LAN may not actively attempt to contact blowfish and fishfinger on their own. In fact, the opposite would likely occur, with the traffic flowing inward instead of outward (this is beyond the scope of this blog post but will be covered in a later post in this series!).

Preshared key



In a WireGuard configuration, the PSK (preshared key) is an optional additional layer of symmetric encryption used alongside the standard public key cryptography. It is a shared secret known to both peers that enhances security by requiring an attacker to compromise both the private keys and the PSK to decrypt communication. While optional, using a PSK is better as it strengthens the cryptographic security, mitigating risks of potential vulnerabilities in the key exchange process.

So, because it is better, we use it.

Mesh network generator



Manually generating wg0.conf files for every peer in a mesh network setup is cumbersome because each peer requires its own unique public/private key pair and a preshared key for each VPN tunnel (resulting in 28 preshared keys for 8 hosts). This complexity scales quadratically with the number of peers, as the relationships between all peers must be explicitly defined, including their unique configurations such as AllowedIPs and Endpoint and optional settings like PersistentKeepalive. Automating the process ensures consistency, reduces human error, saves considerable time, and allows for centralized management of configuration files.
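The quadratic growth is easy to quantify: a full mesh of n hosts needs n*(n-1)/2 tunnels, each with its own preshared key. A quick Ruby check:

```ruby
# Number of tunnels (and thus preshared keys) in a full mesh of n hosts.
def tunnel_count(n)
  n * (n - 1) / 2
end

puts tunnel_count(8) # => 28 preshared keys for the 8 hosts in this setup
```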

Instead, a script can handle key generation, coordinate relationships, and generate all necessary configuration files simultaneously, making it scalable and far less error-prone.

I have written a Ruby script wireguardmeshgenerator.rb to do this for our purposes:

https://codeberg.org/snonux/wireguardmeshgenerator

I use Fedora Linux as my daily driver on my personal laptop, so the script was developed and tested only on Fedora Linux. However, it should also work on other Linux and Unix-like systems.

To set up the mesh generator on Fedora Linux, we run the following:

> git clone https://codeberg.org/snonux/wireguardmeshgenerator
> cd ./wireguardmeshgenerator
> bundle install
> sudo dnf install -y wireguard-tools

This assumes that Ruby and the bundler gem are already installed. If not, refer to the docs of your distribution.

wireguardmeshgenerator.yaml



The file wireguardmeshgenerator.yaml configures the mesh generator script.

---
hosts:
  f0:
    os: FreeBSD
    ssh:
      user: paul
      conf_dir: /usr/local/etc/wireguard
      sudo_cmd: doas
      reload_cmd: service wireguard reload
    lan:
      domain: 'lan.buetow.org'
      ip: '192.168.1.130'
    wg0:
      domain: 'wg0.wan.buetow.org'
      ip: '192.168.2.130'
  f1:
    os: FreeBSD
    ssh:
      user: paul
      conf_dir: /usr/local/etc/wireguard
      sudo_cmd: doas
      reload_cmd: service wireguard reload
    lan:
      domain: 'lan.buetow.org'
      ip: '192.168.1.131'
    wg0:
      domain: 'wg0.wan.buetow.org'
      ip: '192.168.2.131'
  f2:
    os: FreeBSD
    ssh:
      user: paul
      conf_dir: /usr/local/etc/wireguard
      sudo_cmd: doas
      reload_cmd: service wireguard reload
    lan:
      domain: 'lan.buetow.org'
      ip: '192.168.1.132'
    wg0:
      domain: 'wg0.wan.buetow.org'
      ip: '192.168.2.132'
  r0:
    os: Linux
    ssh:
      user: root
      conf_dir: /etc/wireguard
      sudo_cmd:
      reload_cmd: systemctl reload wg-quick@wg0.service
    lan:
      domain: 'lan.buetow.org'
      ip: '192.168.1.120'
    wg0:
      domain: 'wg0.wan.buetow.org'
      ip: '192.168.2.120'
  r1:
    os: Linux
    ssh:
      user: root
      conf_dir: /etc/wireguard
      sudo_cmd:
      reload_cmd: systemctl reload wg-quick@wg0.service
    lan:
      domain: 'lan.buetow.org'
      ip: '192.168.1.121'
    wg0:
      domain: 'wg0.wan.buetow.org'
      ip: '192.168.2.121'
  r2:
    os: Linux
    ssh:
      user: root
      conf_dir: /etc/wireguard
      sudo_cmd:
      reload_cmd: systemctl reload wg-quick@wg0.service
    lan:
      domain: 'lan.buetow.org'
      ip: '192.168.1.122'
    wg0:
      domain: 'wg0.wan.buetow.org'
      ip: '192.168.2.122'
  blowfish:
    os: OpenBSD
    ssh:
      user: rex
      conf_dir: /etc/wireguard
      sudo_cmd: doas
      reload_cmd: sh /etc/netstart wg0
    internet:
      domain: 'buetow.org'
      ip: '23.88.35.144'
    wg0:
      domain: 'wg0.wan.buetow.org'
      ip: '192.168.2.110'
  fishfinger:
    os: OpenBSD
    ssh:
      user: rex
      conf_dir: /etc/wireguard
      sudo_cmd: doas
      reload_cmd: sh /etc/netstart wg0
    internet:
      domain: 'buetow.org'
      ip: '46.23.94.99'
    wg0:
      domain: 'wg0.wan.buetow.org'
      ip: '192.168.2.111'

The file specifies details such as SSH user settings, configuration directories, sudo or reload commands, and IP/domain assignments for both internal LAN-facing interfaces and WireGuard (wg0) interfaces. Each host is assigned specific roles, including internal participants and publicly accessible nodes with internet-facing IPs, enabling the creation of a fully connected mesh VPN.
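Because it is a full mesh, each host's peer list is simply "every other host" in this file. A minimal sketch of that idea, using a trimmed-down inline YAML (a hypothetical subset of the real file, for illustration only):

```ruby
require 'yaml'

# Trimmed-down, hypothetical subset of wireguardmeshgenerator.yaml.
conf = YAML.safe_load(<<~YAML)
  hosts:
    f0: { wg0: { ip: '192.168.2.130' } }
    r0: { wg0: { ip: '192.168.2.120' } }
    blowfish: { wg0: { ip: '192.168.2.110' } }
YAML

# In a full mesh, every host peers with all other hosts.
peers_of = ->(host) { conf['hosts'].keys - [host] }

puts peers_of.call('f0').inspect # => ["r0", "blowfish"]
```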

wireguardmeshgenerator.rb overview



The wireguardmeshgenerator.rb script consists of the following base classes:

  • KeyTool: Manages WireGuard key generation and retrieval. It ensures the presence of public/private key pairs and preshared keys (PSKs). If keys are missing, it generates them using the wg tool. It provides methods to read the public/private keys and retrieve or generate a PSK for communication with a peer. The keys are stored in a temp directory on the system from where the generator is run.
  • PeerSnippet: A Struct representing the configuration for a single WireGuard peer in the mesh. Based on the provided attributes and configuration, it generates the peer's WireGuard configuration, including public key, PSK, allowed IPs, endpoint, and keepalive settings.
  • WireguardConfig: This class generates WireGuard configuration files for the specified host in the mesh network. It includes the [Interface] section for the host itself and the [Peer] sections for all other peers. It can also clean up generated files and directories and create the required directory structure for storing configuration files locally on the system from which the script is run.
  • InstallConfig: Handles uploading, installing, and restarting the WireGuard service on remote hosts using SSH and SCP. It ensures the configuration file is uploaded to the remote machine, the necessary directories are present and correctly configured, and the WireGuard service reloads with the new configuration.

Finally (if you want to see the code for the classes listed above, go to the Git repo and have a look), we glue it all together in this block:

begin
  options = { hosts: [] }
  OptionParser.new do |opts|
    opts.banner = 'Usage: wireguardmeshgenerator.rb [options]'
    opts.on('--generate', 'Generate Wireguard configs') do
      options[:generate] = true
    end
    opts.on('--install', 'Install Wireguard configs') do
      options[:install] = true
    end
    opts.on('--clean', 'Clean Wireguard configs') do
      options[:clean] = true
    end
    opts.on('--hosts=HOSTS', 'Comma separated hosts to configure') do |hosts|
      options[:hosts] = hosts.split(',')
    end
  end.parse!

  conf = YAML.load_file('wireguardmeshgenerator.yaml').freeze
  conf['hosts'].keys.select { options[:hosts].empty? || options[:hosts].include?(_1) }
               .each do |host|
    # Generate Wireguard configuration for the host.
    WireguardConfig.new(host, conf['hosts']).generate! if options[:generate]
    # Install Wireguard configuration for the host.
    InstallConfig.new(host, conf['hosts']).upload!.install!.reload! if options[:install]
    # Clean Wireguard configuration for the host.
    WireguardConfig.new(host, conf['hosts']).clean! if options[:clean]
  end
rescue StandardError => e
  puts "Error: #{e.message}"
  puts e.backtrace.join("\n")
  exit 2
end

And we also have a Rakefile:

task :generate do
  ruby 'wireguardmeshgenerator.rb', '--generate'
end

task :clean do
  ruby 'wireguardmeshgenerator.rb', '--clean'
end

task :install do
  ruby 'wireguardmeshgenerator.rb', '--install'
end

task default: :generate


Invoking the mesh network generator



Generating the wg0.conf files and keys



To generate everything (the wg0.conf of all participating hosts, including all keys involved), we run the following:

> rake generate
/usr/bin/ruby wireguardmeshgenerator.rb --generate
Generating dist/f0/etc/wireguard/wg0.conf
Generating dist/f1/etc/wireguard/wg0.conf
Generating dist/f2/etc/wireguard/wg0.conf
Generating dist/r0/etc/wireguard/wg0.conf
Generating dist/r1/etc/wireguard/wg0.conf
Generating dist/r2/etc/wireguard/wg0.conf
Generating dist/blowfish/etc/wireguard/wg0.conf
Generating dist/fishfinger/etc/wireguard/wg0.conf

It generated all the wg0.conf files listed in the output, plus the following keys:

> find keys/ -type f
keys/f0/priv.key
keys/f0/pub.key
keys/psk/f0_f1.key
keys/psk/f0_f2.key
keys/psk/f0_r0.key
keys/psk/f0_r1.key
keys/psk/f0_r2.key
keys/psk/blowfish_f0.key
keys/psk/f0_fishfinger.key
keys/psk/f1_f2.key
keys/psk/f1_r0.key
keys/psk/f1_r1.key
keys/psk/f1_r2.key
keys/psk/blowfish_f1.key
keys/psk/f1_fishfinger.key
keys/psk/f2_r0.key
keys/psk/f2_r1.key
keys/psk/f2_r2.key
keys/psk/blowfish_f2.key
keys/psk/f2_fishfinger.key
keys/psk/r0_r1.key
keys/psk/r0_r2.key
keys/psk/blowfish_r0.key
keys/psk/fishfinger_r0.key
keys/psk/r1_r2.key
keys/psk/blowfish_r1.key
keys/psk/fishfinger_r1.key
keys/psk/blowfish_r2.key
keys/psk/fishfinger_r2.key
keys/psk/blowfish_fishfinger.key
keys/f1/priv.key
keys/f1/pub.key
keys/f2/priv.key
keys/f2/pub.key
keys/r0/priv.key
keys/r0/pub.key
keys/r1/priv.key
keys/r1/pub.key
keys/r2/priv.key
keys/r2/pub.key
keys/blowfish/priv.key
keys/blowfish/pub.key
keys/fishfinger/priv.key
keys/fishfinger/pub.key

Those keys are embedded in the resulting wg0.conf, so later, we only need to install the wg0.conf files and not all the keys individually.
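Note the PSK file naming: each tunnel's key file uses the two host names in sorted order, so both endpoints of a tunnel resolve to the same file (hence blowfish_f0.key rather than f0_blowfish.key). A sketch of that convention, as I infer it from the listing above (not necessarily the script's exact code):

```ruby
# Derive the shared PSK filename for a pair of hosts.
# Sorting makes the name identical regardless of argument order.
def psk_filename(host_a, host_b)
  "keys/psk/#{[host_a, host_b].sort.join('_')}.key"
end

puts psk_filename('f0', 'blowfish')   # => keys/psk/blowfish_f0.key
puts psk_filename('fishfinger', 'f0') # => keys/psk/f0_fishfinger.key
```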

Installing the wg0.conf files



Uploading the wg0.conf files to the participating hosts and reloading WireGuard on them is then just a matter of executing the following (this expects that all participating hosts are up and running):

> rake install
/usr/bin/ruby wireguardmeshgenerator.rb --install
Uploading dist/f0/etc/wireguard/wg0.conf to f0.lan.buetow.org:.
Installing Wireguard config on f0
Uploading cmd.sh to f0.lan.buetow.org:.
+ [ ! -d /usr/local/etc/wireguard ]
+ doas chmod 700 /usr/local/etc/wireguard
+ doas mv -v wg0.conf /usr/local/etc/wireguard
wg0.conf -> /usr/local/etc/wireguard/wg0.conf
+ doas chmod 644 /usr/local/etc/wireguard/wg0.conf
+ rm cmd.sh
Reloading Wireguard on f0
Uploading cmd.sh to f0.lan.buetow.org:.
+ doas service wireguard reload
+ rm cmd.sh
Uploading dist/f1/etc/wireguard/wg0.conf to f1.lan.buetow.org:.
Installing Wireguard config on f1
Uploading cmd.sh to f1.lan.buetow.org:.
+ [ ! -d /usr/local/etc/wireguard ]
+ doas chmod 700 /usr/local/etc/wireguard
+ doas mv -v wg0.conf /usr/local/etc/wireguard
wg0.conf -> /usr/local/etc/wireguard/wg0.conf
+ doas chmod 644 /usr/local/etc/wireguard/wg0.conf
+ rm cmd.sh
Reloading Wireguard on f1
Uploading cmd.sh to f1.lan.buetow.org:.
+ doas service wireguard reload
+ rm cmd.sh
Uploading dist/f2/etc/wireguard/wg0.conf to f2.lan.buetow.org:.
Installing Wireguard config on f2
Uploading cmd.sh to f2.lan.buetow.org:.
+ [ ! -d /usr/local/etc/wireguard ]
+ doas chmod 700 /usr/local/etc/wireguard
+ doas mv -v wg0.conf /usr/local/etc/wireguard
wg0.conf -> /usr/local/etc/wireguard/wg0.conf
+ doas chmod 644 /usr/local/etc/wireguard/wg0.conf
+ rm cmd.sh
Reloading Wireguard on f2
Uploading cmd.sh to f2.lan.buetow.org:.
+ doas service wireguard reload
+ rm cmd.sh
Uploading dist/r0/etc/wireguard/wg0.conf to r0.lan.buetow.org:.
Installing Wireguard config on r0
Uploading cmd.sh to r0.lan.buetow.org:.
+ '[' '!' -d /etc/wireguard ']'
+ chmod 700 /etc/wireguard
+ mv -v wg0.conf /etc/wireguard
renamed 'wg0.conf' -> '/etc/wireguard/wg0.conf'
+ chmod 644 /etc/wireguard/wg0.conf
+ rm cmd.sh
Reloading Wireguard on r0
Uploading cmd.sh to r0.lan.buetow.org:.
+ systemctl reload wg-quick@wg0.service
+ rm cmd.sh
Uploading dist/r1/etc/wireguard/wg0.conf to r1.lan.buetow.org:.
Installing Wireguard config on r1
Uploading cmd.sh to r1.lan.buetow.org:.
+ '[' '!' -d /etc/wireguard ']'
+ chmod 700 /etc/wireguard
+ mv -v wg0.conf /etc/wireguard
renamed 'wg0.conf' -> '/etc/wireguard/wg0.conf'
+ chmod 644 /etc/wireguard/wg0.conf
+ rm cmd.sh
Reloading Wireguard on r1
Uploading cmd.sh to r1.lan.buetow.org:.
+ systemctl reload wg-quick@wg0.service
+ rm cmd.sh
Uploading dist/r2/etc/wireguard/wg0.conf to r2.lan.buetow.org:.
Installing Wireguard config on r2
Uploading cmd.sh to r2.lan.buetow.org:.
+ '[' '!' -d /etc/wireguard ']'
+ chmod 700 /etc/wireguard
+ mv -v wg0.conf /etc/wireguard
renamed 'wg0.conf' -> '/etc/wireguard/wg0.conf'
+ chmod 644 /etc/wireguard/wg0.conf
+ rm cmd.sh
Reloading Wireguard on r2
Uploading cmd.sh to r2.lan.buetow.org:.
+ systemctl reload wg-quick@wg0.service
+ rm cmd.sh
Uploading dist/blowfish/etc/wireguard/wg0.conf to blowfish.buetow.org:.
Installing Wireguard config on blowfish
Uploading cmd.sh to blowfish.buetow.org:.
+ [ ! -d /etc/wireguard ]
+ doas chmod 700 /etc/wireguard
+ doas mv -v wg0.conf /etc/wireguard
wg0.conf -> /etc/wireguard/wg0.conf
+ doas chmod 644 /etc/wireguard/wg0.conf
+ rm cmd.sh
Reloading Wireguard on blowfish
Uploading cmd.sh to blowfish.buetow.org:.
+ doas sh /etc/netstart wg0
+ rm cmd.sh
Uploading dist/fishfinger/etc/wireguard/wg0.conf to fishfinger.buetow.org:.
Installing Wireguard config on fishfinger
Uploading cmd.sh to fishfinger.buetow.org:.
+ [ ! -d /etc/wireguard ]
+ doas chmod 700 /etc/wireguard
+ doas mv -v wg0.conf /etc/wireguard
wg0.conf -> /etc/wireguard/wg0.conf
+ doas chmod 644 /etc/wireguard/wg0.conf
+ rm cmd.sh
Reloading Wireguard on fishfinger
Uploading cmd.sh to fishfinger.buetow.org:.
+ doas sh /etc/netstart wg0
+ rm cmd.sh
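For reference, the cmd.sh uploaded to each host boils down to only a few lines. The sketch below is reconstructed from the xtrace output above; the function name, the mkdir branch (never hit in the traces because the directories already existed), and the DOAS/CONFDIR parameters are my own additions to cover both the BSD variant (doas, /usr/local/etc/wireguard on FreeBSD) and the Linux variant (already root, /etc/wireguard):

```shell
#!/bin/sh
# Sketch of the per-host install step, reconstructed from the xtrace above.
# DOAS is "doas" on the BSD hosts and empty on the Linux VMs (already root);
# CONFDIR is /usr/local/etc/wireguard on FreeBSD and /etc/wireguard elsewhere.
install_wg_conf() {
    DOAS=$1
    CONFDIR=$2
    [ -d "$CONFDIR" ] || $DOAS mkdir -p "$CONFDIR"  # branch assumed, not in the trace
    $DOAS chmod 700 "$CONFDIR"
    $DOAS mv -v wg0.conf "$CONFDIR"
    $DOAS chmod 644 "$CONFDIR/wg0.conf"
}

# E.g. on a FreeBSD host:
# install_wg_conf doas /usr/local/etc/wireguard
```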

Re-generating the mesh and installing the wg0.conf files again



The mesh network can be re-generated and re-installed as follows:

> rake clean
> rake generate
> rake install

Note that this also deletes and re-generates all the keys involved.

Happy WireGuard-ing



Everything is set up now. For example, on f0:

paul@f0:~ % doas wg show
interface: wg0
  public key: Jm6YItMt94++dIeOyVi1I9AhNt2qQcryxCZezoX7X2Y=
  private key: (hidden)
  listening port: 56709

peer: 8PvGZH1NohHpZPVJyjhctBX9xblsNvYBhpg68FsFcns=
  preshared key: (hidden)
  endpoint: 46.23.94.99:56709
  allowed ips: 192.168.2.111/32
  latest handshake: 1 minute, 46 seconds ago
  transfer: 124 B received, 1.75 KiB sent
  persistent keepalive: every 25 seconds

peer: Xow+d3qVXgUMk4pcRSQ6Fe+vhYBa3VDyHX/4jrGoKns=
  preshared key: (hidden)
  endpoint: 23.88.35.144:56709
  allowed ips: 192.168.2.110/32
  latest handshake: 1 minute, 52 seconds ago
  transfer: 124 B received, 1.60 KiB sent
  persistent keepalive: every 25 seconds

peer: s3e93XoY7dPUQgLiVO4d8x/SRCFgEew+/wP7+zwgehI=
  preshared key: (hidden)
  endpoint: 192.168.1.120:56709
  allowed ips: 192.168.2.120/32

peer: 2htXdNcxzpI2FdPDJy4T4VGtm1wpMEQu1AkQHjNY6F8=
  preshared key: (hidden)
  endpoint: 192.168.1.131:56709
  allowed ips: 192.168.2.131/32

peer: 0Y/H20W8YIbF7DA1sMwMacLI8WS9yG+1/QO7m2oyllg=
  preshared key: (hidden)
  endpoint: 192.168.1.122:56709
  allowed ips: 192.168.2.122/32

peer: Hhy9kMPOOjChXV2RA5WeCGs+J0FE3rcNPDw/TLSn7i8=
  preshared key: (hidden)
  endpoint: 192.168.1.121:56709
  allowed ips: 192.168.2.121/32

peer: SlGVsACE1wiaRoGvCR3f7AuHfRS+1jjhS+YwEJ2HvF0=
  preshared key: (hidden)
  endpoint: 192.168.1.132:56709
  allowed ips: 192.168.2.132/32

All the hosts are pingable as well, e.g.:

paul@f0:~ % foreach peer ( f1 f2 r0 r1 r2 blowfish fishfinger )
foreach? ping -c2 $peer.wg0
foreach? echo
foreach? end
PING f1.wg0 (192.168.2.131): 56 data bytes
64 bytes from 192.168.2.131: icmp_seq=0 ttl=64 time=0.334 ms
64 bytes from 192.168.2.131: icmp_seq=1 ttl=64 time=0.260 ms

--- f1.wg0 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.260/0.297/0.334/0.037 ms

PING f2.wg0 (192.168.2.132): 56 data bytes
64 bytes from 192.168.2.132: icmp_seq=0 ttl=64 time=0.323 ms
64 bytes from 192.168.2.132: icmp_seq=1 ttl=64 time=0.303 ms

--- f2.wg0 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.303/0.313/0.323/0.010 ms

PING r0.wg0 (192.168.2.120): 56 data bytes
64 bytes from 192.168.2.120: icmp_seq=0 ttl=64 time=0.716 ms
64 bytes from 192.168.2.120: icmp_seq=1 ttl=64 time=0.406 ms

--- r0.wg0 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.406/0.561/0.716/0.155 ms

PING r1.wg0 (192.168.2.121): 56 data bytes
64 bytes from 192.168.2.121: icmp_seq=0 ttl=64 time=0.639 ms
64 bytes from 192.168.2.121: icmp_seq=1 ttl=64 time=0.629 ms

--- r1.wg0 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.629/0.634/0.639/0.005 ms

PING r2.wg0 (192.168.2.122): 56 data bytes
64 bytes from 192.168.2.122: icmp_seq=0 ttl=64 time=0.569 ms
64 bytes from 192.168.2.122: icmp_seq=1 ttl=64 time=0.479 ms

--- r2.wg0 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.479/0.524/0.569/0.045 ms

PING blowfish.wg0 (192.168.2.110): 56 data bytes
64 bytes from 192.168.2.110: icmp_seq=0 ttl=255 time=35.745 ms
64 bytes from 192.168.2.110: icmp_seq=1 ttl=255 time=35.481 ms

--- blowfish.wg0 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 35.481/35.613/35.745/0.132 ms

PING fishfinger.wg0 (192.168.2.111): 56 data bytes
64 bytes from 192.168.2.111: icmp_seq=0 ttl=255 time=33.992 ms
64 bytes from 192.168.2.111: icmp_seq=1 ttl=255 time=33.751 ms

--- fishfinger.wg0 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 33.751/33.872/33.992/0.120 ms

Note that the loop above is a tcsh loop, tcsh being the default shell on FreeBSD. Of course, all the other peers can ping their peers in the same way!
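For non-tcsh users, the same loop in plain POSIX sh would look like this (wrapped in a hypothetical helper function here, with the peer list passed as arguments):

```shell
# POSIX sh equivalent of the tcsh loop above: ping each peer
# twice over its wg0 address.
ping_peers() {
    for peer in "$@"; do
        ping -c2 "$peer.wg0"
        echo
    done
}

# Usage: ping_peers f1 f2 r0 r1 r2 blowfish fishfinger
```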

After the first pings, the VPN tunnels also show handshakes and the amount of data transferred through them:

paul@f0:~ % doas wg show
interface: wg0
  public key: Jm6YItMt94++dIeOyVi1I9AhNt2qQcryxCZezoX7X2Y=
  private key: (hidden)
  listening port: 56709

peer: 0Y/H20W8YIbF7DA1sMwMacLI8WS9yG+1/QO7m2oyllg=
  preshared key: (hidden)
  endpoint: 192.168.1.122:56709
  allowed ips: 192.168.2.122/32
  latest handshake: 10 seconds ago
  transfer: 440 B received, 532 B sent

peer: Hhy9kMPOOjChXV2RA5WeCGs+J0FE3rcNPDw/TLSn7i8=
  preshared key: (hidden)
  endpoint: 192.168.1.121:56709
  allowed ips: 192.168.2.121/32
  latest handshake: 12 seconds ago
  transfer: 440 B received, 564 B sent

peer: s3e93XoY7dPUQgLiVO4d8x/SRCFgEew+/wP7+zwgehI=
  preshared key: (hidden)
  endpoint: 192.168.1.120:56709
  allowed ips: 192.168.2.120/32
  latest handshake: 14 seconds ago
  transfer: 440 B received, 564 B sent

peer: SlGVsACE1wiaRoGvCR3f7AuHfRS+1jjhS+YwEJ2HvF0=
  preshared key: (hidden)
  endpoint: 192.168.1.132:56709
  allowed ips: 192.168.2.132/32
  latest handshake: 17 seconds ago
  transfer: 472 B received, 564 B sent

peer: Xow+d3qVXgUMk4pcRSQ6Fe+vhYBa3VDyHX/4jrGoKns=
  preshared key: (hidden)
  endpoint: 23.88.35.144:56709
  allowed ips: 192.168.2.110/32
  latest handshake: 55 seconds ago
  transfer: 472 B received, 596 B sent
  persistent keepalive: every 25 seconds

peer: 8PvGZH1NohHpZPVJyjhctBX9xblsNvYBhpg68FsFcns=
  preshared key: (hidden)
  endpoint: 46.23.94.99:56709
  allowed ips: 192.168.2.111/32
  latest handshake: 55 seconds ago
  transfer: 472 B received, 596 B sent
  persistent keepalive: every 25 seconds

peer: 2htXdNcxzpI2FdPDJy4T4VGtm1wpMEQu1AkQHjNY6F8=
  preshared key: (hidden)
  endpoint: 192.168.1.131:56709
  allowed ips: 192.168.2.131/32

Conclusion



Having a WireGuard mesh between our hosts secures all the traffic between them for our future k3s setup. A self-managed WireGuard mesh is preferable to Tailscale here: it eliminates the reliance on a third party, gives full control over the configuration, and reduces unnecessary abstraction and "magic", which makes debugging easier and ensures full ownership of our network.

I look forward to the next blog post in this series, in which we may start setting up k3s or take a first look at the NFS server (for persistent storage) side of things. I hope you have enjoyed the posts in this series so far.

Other *BSD-related posts:

2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network (You are currently reading this)
2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs
2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts
2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation
2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage
2024-04-01 KISS high-availability with OpenBSD
2024-01-13 One reason why I love OpenBSD
2022-10-30 Installing DTail on OpenBSD
2022-07-30 Let's Encrypt with OpenBSD and Rex
2016-04-09 Jails and ZFS with Puppet on FreeBSD

E-Mail your comments to paul@nospam.buetow.org

Back to the main site
Terminal multiplexing with `tmux` - Fish edition gemini://foo.zone/gemfeed/2025-05-02-terminal-multiplexing-with-tmux-fish-edition.gmi 2025-05-02T00:09:23+03:00 Paul Buetow aka snonux paul@dev.buetow.org This is the Fish shell edition of the same post (but for Z-Shell) of mine from last year:

Terminal multiplexing with tmux - Fish edition



Published at 2025-05-02T00:09:23+03:00

This is the Fish shell edition of the same post (but for Z-Shell) of mine from last year:

./2024-06-23-terminal-multiplexing-with-tmux.html

Tmux (Terminal Multiplexer) is a powerful, terminal-based tool that manages multiple terminal sessions within a single window. Here are some of its primary features and functionalities:

  • Session management
  • Window and Pane management
  • Persistent Workspace
  • Customization

https://github.com/tmux/tmux/wiki

            _______                           s
           |.-----.|                           s
           || Tmux||                          s
           ||_.-._||       |\   \\\\__     o          s
           `--)-(--`       | \_/    o \    o          s
          __[=== o]__      > _   (( <_  oo            s
         |:::::::::::|\    | / \__+___/               s
   jgs   `-=========-`()   |/     |/                  s
       mod. by Paul B.

Table of Contents




Before continuing...



Before continuing to read this post, I encourage you to get familiar with Tmux first (unless you already know the basics). You can go through the official getting started guide:

https://github.com/tmux/tmux/wiki/Getting-Started

I can also recommend this book (it is the book I used to get started with Tmux):

https://pragprog.com/titles/bhtmux2/tmux-2/

Over the years, I have built a couple of shell helper functions to optimize my workflows. Tmux is extensively integrated into my daily workflows, both personal and work. Colleagues have asked me about my Tmux config and helper scripts several times, so I thought it would be neat to blog about them so that everyone interested can make a copy of my configuration and scripts.

The configuration and scripts in this blog post are only the non-work-specific parts. There are more helper scripts, which I only use for work (and aren't really useful outside of work due to the way servers and clusters are structured there).

Tmux is highly configurable, and I think I am only scratching the surface of what is possible with it. Nevertheless, it may still be useful for you. I also love that Tmux is part of the OpenBSD base system!

Shell aliases



Since last week, I have been playing a bit with the Fish shell. As a result, I also converted all my Tmux helper scripts (mentioned in this blog post) from Z-Shell to Fish.

https://fishshell.com

For the most common Tmux commands I use, I have created the following shell aliases:

alias tn 'tmux::new'
alias ta 'tmux::attach'
alias tx 'tmux::remote'
alias ts 'tmux::search'
alias tssh 'tmux::cluster_ssh'
alias tm tmux
alias tl 'tmux list-sessions'
alias foo 'tmux::new foo'
alias bar 'tmux::new bar'
alias baz 'tmux::new baz'

Note all the tmux::... names; those are custom shell functions, not part of the Tmux distribution. Let's run through the aliases one by one.

Two of them are pretty straightforward: tm is simply a shorthand for tmux, so I have to type less, and tl lists all Tmux sessions that are currently open. No magic here.

The tn alias - Creating a new session



The tn alias is referencing this function:

# Create a new session, or attach to it if it already exists
function tmux::new
    set -l session $argv[1]
    _tmux::cleanup_default
    if test -z "$session"
        tmux::new (string join "" T (date +%s))
    else
        tmux new-session -d -s $session
        tmux -2 attach-session -t $session || tmux -2 switch-client -t $session
    end
end

There is a lot going on here. Let's have a detailed look at what it is doing.

First, a Tmux session name can be passed to the function as the first argument. The session name is optional; without it, the function picks a default name with (string join "" T (date +%s)), which is T followed by the UNIX epoch, e.g. T1717133796.

Cleaning up default sessions automatically



Note also the call to _tmux::cleanup_default; it cleans up all already opened default sessions that aren't attached. Those sessions are only temporary, and I had too many flying around after a while, so I decided to auto-delete them when they aren't attached. If I want to keep a session around, I rename it with the Tmux command prefix-key $. This is the cleanup function:

function _tmux::cleanup_default
    tmux list-sessions | string match -e -r '^T.*: ' | string match -v -r attached | string split -f1 ':' | while read -l s
        echo "Killing $s"
        tmux kill-session -t "$s"
    end
end

The cleanup function kills all open Tmux sessions that haven't been renamed properly yet—but only if they aren't attached (e.g., don't run in the foreground in any terminal). Cleaning them up automatically keeps my Tmux sessions as neat and tidy as possible.

Renaming sessions



Whenever I am in a temporary session (named T....), I may decide that I want to keep this session around. I have to rename the session to prevent the cleanup function from doing its thing. That's, as mentioned already, easily accomplished with the standard prefix-key $ Tmux command.

The ta alias - Attaching to a session



This alias refers to the following function, which tries to attach to an already-running Tmux session.

function tmux::attach
    set -l session $argv[1]
    if test -z "$session"
        tmux attach-session || tmux::new
    else
        tmux attach-session -t $session || tmux::new $session
    end
end

If no session is specified (as the argument of the function), it will try to attach to the first open session. If no Tmux server is running, it will create a new one with tmux::new. Otherwise, with a session name given as the argument, it will attach to it. If unsuccessful (e.g., the session doesn't exist), it will be created and attached to.

The tx alias - For a nested remote session



This SSHs into the specified remote server and then, on the server itself, starts a nested Tmux session. So we have one Tmux session on the local computer and, inside it, an SSH connection to a remote server that runs another Tmux session. The benefit: if my network connection breaks down, the next time I connect I can continue my work on the remote server exactly where I left off. The session name is the name of the server being SSHed into. If such a session already exists, it simply attaches to it.

function tmux::remote
    set -l server $argv[1]
    tmux new -s $server "ssh -A -t $server 'tmux attach-session || tmux'" || tmux attach-session -d -t $server
end

Change of the Tmux prefix for better nesting



To make nested Tmux sessions work smoothly, one must change the Tmux prefix key locally or remotely. By default, the Tmux prefix key is Ctrl-b, so Ctrl-b $, for example, renames the current session. To change the prefix key from the standard Ctrl-b to, for example, Ctrl-g, you must add this to the tmux.conf:

set-option -g prefix C-g

This way, when I want to rename the remote Tmux session, I have to use Ctrl-g $, and when I want to rename the local Tmux session, I still have to use Ctrl-b $. In my case, I have this deployed to all remote servers through a configuration management system (out of scope for this blog post).
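For completeness, changing the prefix usually involves two more lines, so that the old binding is gone and the new prefix key can still be sent literally to programs that need it. A sketch of the relevant remote tmux.conf part, assuming Ctrl-g as the new prefix:

```conf
# Move the tmux prefix from Ctrl-b to Ctrl-g
unbind-key C-b
set-option -g prefix C-g
bind-key C-g send-prefix
```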

There is also a way around this without reconfiguring the prefix key: by default, pressing the prefix key twice sends it through to the inner session. But that is cumbersome to use, as far as I remember.

The ts alias - Searching sessions with fuzzy finder



Even though _tmux::cleanup_default prevents a huge mess of Tmux sessions from piling up, at times it can still be challenging to find exactly the session I am currently interested in. After a busy workday, I often end up with around twenty sessions on my laptop. This is where fuzzy searching for session names comes in handy, as I often don't remember the exact names.

function tmux::search
    set -l session (tmux list-sessions | fzf | cut -d: -f1)
    if test -z "$session"
        return # fzf was aborted without a selection
    end
    if test -z "$TMUX"
        tmux attach-session -t $session
    else
        tmux switch -t $session
    end
end

All it does is list all currently open sessions in fzf, where one of them can be searched and selected through fuzzy find, and then either switch (if already inside a session) to the other session or attach to the other session (if not yet in Tmux).

You must install the fzf command on your computer for this to work. This is how it looks:

Tmux session fuzzy finder

The tssh alias - Cluster SSH replacement



Before I used Tmux, I was a heavy user of ClusterSSH, which allowed me to log in to multiple servers at once in a single terminal window and type and run commands on all of them in parallel.

https://github.com/duncs/clusterssh

However, since I started using Tmux, I have retired ClusterSSH: Tmux runs entirely inside the terminal, whereas ClusterSSH spawned separate terminal windows, which aren't easily portable (e.g., from a Linux desktop to macOS). The tmux::cluster_ssh function accepts N arguments, where:

  • ...the first argument will be the session name (see tmux::tssh_from_argument helper function), and all remaining arguments will be server hostnames/FQDNs to connect to simultaneously.
  • ...or, the first argument is a file name, and the file contains a list of hostnames/FQDNs (see the tmux::tssh_from_file helper function).

This is the function definition behind the tssh alias:

function tmux::cluster_ssh
    if test -f "$argv[1]"
        tmux::tssh_from_file $argv[1]
        return
    end
    tmux::tssh_from_argument $argv
end

This function is just a wrapper around the more complex tmux::tssh_from_file and tmux::tssh_from_argument functions. Most of the magic happens there.

The tmux::tssh_from_argument helper



This is the most magical helper function covered in this post. It looks like this:

function tmux::tssh_from_argument
    set -l session $argv[1]
    set -l first_server_or_container $argv[2]
    set -l remaining_servers $argv[3..-1]
    if test -z "$first_server_or_container"
        set first_server_or_container $session
    end

    tmux new-session -d -s $session (_tmux::connect_command "$first_server_or_container")
    if not tmux list-sessions | grep -q "^$session:"
        echo "Could not create session $session"
        return 2
    end
    for server_or_container in $remaining_servers
        tmux split-window -t $session "tmux select-layout tiled; $(_tmux::connect_command "$server_or_container")"
    end
    tmux setw -t $session synchronize-panes on
    tmux -2 attach-session -t $session || tmux -2 switch-client -t $session
end

It takes the session name for the clustered SSH session as its first argument. All other arguments are server hostnames or FQDNs to connect to; if none are given, the session name doubles as the host to connect to. The first one is used to create the initial session. All remaining ones are added to that session with tmux split-window -t $session.... At the end, we enable synchronized panes by default, so whatever you type is sent to every SSH connection, giving us the neat ClusterSSH feature of running commands on multiple servers simultaneously. Once done, we attach to the session (or switch to it if already inside Tmux).

Sometimes, I don't want the synchronized panes behavior and want to switch it off temporarily. I can do that with prefix-key p and prefix-key P after adding the following to my local tmux.conf:

bind-key p setw synchronize-panes off
bind-key P setw synchronize-panes on

The tmux::tssh_from_file helper



This one sets the session name to the base name of the file (without its extension) and then reads a list of servers from that file, passing them to tmux::tssh_from_argument as arguments. So, this is a neat little wrapper that enables me to open clustered SSH sessions from an input file.

function tmux::tssh_from_file
    set -l serverlist $argv[1]
    set -l session (basename $serverlist | cut -d. -f1)
    tmux::tssh_from_argument $session (awk '{ print $1 }' $serverlist | sed 's/.lan./.lan/g')
end

tssh examples



To open a new session named fish and log in to 4 remote hosts, run this command (Note that it is also possible to specify the remote user):

$ tssh fish blowfish.buetow.org fishfinger.buetow.org \
    fishbone.buetow.org user@octopus.buetow.org

To open a new session named manyservers, put many servers (one FQDN per line) into a file called manyservers.txt and simply run:

$ tssh manyservers.txt
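The session name manyservers comes from the file-name derivation in tmux::tssh_from_file. The same derivation in plain sh, with a hypothetical helper name, looks like this:

```shell
# Derive the tssh session name from a server-list file name,
# mirroring the basename/cut pipeline in tmux::tssh_from_file.
session_name() {
    basename "$1" | cut -d. -f1
}

# session_name path/to/manyservers.txt  -> manyservers
```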

Common Tmux commands I use in tssh



These are default Tmux commands that I make heavy use of in a tssh session:

  • Press prefix-key DIRECTION to switch panes. DIRECTION is by default any of the arrow keys, but I also configured Vi keybindings.
  • Press prefix-key <space> to change the pane layout (can be pressed multiple times to cycle through them).
  • Press prefix-key z to zoom in and out of the current active pane.

Copy and paste workflow



As you will see later in this blog post, I have configured a history limit of 100,000 lines in Tmux so that I can scroll back quite far. One main workflow of mine is to search for text in the Tmux history, select and copy it, and then switch to another window or session and paste it there (e.g., into my text editor to do something with it).

This works by pressing prefix-key [ to enter Tmux copy mode. From there, I can browse the Tmux history of the current window using either the arrow keys or vi-like navigation (see vi configuration later in this blog post) and the Pg-Dn and Pg-Up keys.

I often search the history backwards with prefix-key [ followed by a ?, which opens the Tmux history search prompt.

Once I have identified the terminal text to be copied, I enter visual select mode with v, highlight all the text to be copied (using arrow keys or Vi motions), and press y to yank it (sorry if this all sounds a bit complicated, but Vim/NeoVim users will know this, as it is pretty much how you do it there as well).

For v and y to work, the following has to be added to the Tmux configuration file:

bind-key -T copy-mode-vi 'v' send -X begin-selection
bind-key -T copy-mode-vi 'y' send -X copy-selection-and-cancel

Once the text is yanked, I switch to another Tmux window or session where, for example, a text editor is running and paste the yanked text from Tmux into the editor with prefix-key ]. Note that when pasting into a modal text editor like Vi or Helix, you would first need to enter insert mode before prefix-key ] would paste anything.

Tmux configurations



Some features I have configured directly in Tmux don't require an external shell alias to function correctly. Let's walk line by line through my local ~/.config/tmux/tmux.conf:

source ~/.config/tmux/tmux.local.conf

set-option -g allow-rename off
set-option -g history-limit 100000
set-option -g status-bg '#444444'
set-option -g status-fg '#ffa500'
set-option -s escape-time 0

There's not much magic happening here. I source a tmux.local.conf, which I sometimes use to override the default configuration that comes from the configuration management system. It is mostly just an empty file, so Tmux doesn't throw any errors on startup when I don't use it.

I work with a lot of terminal output, which I also like to search within Tmux. So, I added a large enough history-limit, enabling me to search backwards through up to 100,000 lines of output.

Besides changing some colours (personal taste), I also set escape-time to 0, which is just a workaround: without it, my Helix text editor's ESC key takes ages to trigger within Tmux. I can't remember the gory details; if everything works fine for you without it, you can leave it out.

The next lines in the configuration file are:

set-window-option -g mode-keys vi
bind-key -T copy-mode-vi 'v' send -X begin-selection
bind-key -T copy-mode-vi 'y' send -X copy-selection-and-cancel

I navigate within Tmux using Vi keybindings, so mode-keys is set to vi. I use the Helix modal text editor, which is close enough to Vi bindings for simple navigation to feel "native" to me. (By the way, I have been a long-time Vim and NeoVim user, but I eventually switched to Helix. That's off-topic here, but it may be worth another blog post at some point.)

The two bind-key commands make it so that I can use v and y in copy mode, which feels more Vi-like (as already discussed earlier in this post).

The next set of lines in the configuration file are:

bind-key h select-pane -L
bind-key j select-pane -D
bind-key k select-pane -U
bind-key l select-pane -R

bind-key H resize-pane -L 5
bind-key J resize-pane -D 5
bind-key K resize-pane -U 5
bind-key L resize-pane -R 5

These allow me to use prefix-key h, prefix-key j, prefix-key k, and prefix-key l for switching panes and prefix-key H, prefix-key J, prefix-key K, and prefix-key L for resizing the panes. If you don't know Vi/Vim/NeoVim, the letters hjkl are commonly used there for left, down, up, and right, which is also the same for Helix, by the way.

The next set of lines in the configuration file are:

bind-key c new-window -c '#{pane_current_path}'
bind-key F new-window -n "session-switcher" "tmux list-sessions | fzf | cut -d: -f1 | xargs tmux switch-client -t"
bind-key T choose-tree

The first one makes any new window start in the current pane's directory. The second one is more interesting: it lists all open sessions in the fuzzy finder. I rely heavily on this during my daily workflow to switch between various sessions depending on the task, e.g. from a remote cluster SSH session to a local code editor.

The third one, choose-tree, opens a tree view in Tmux listing all sessions and windows. This one is handy to get a better overview of what is currently running in any local Tmux session. It looks like this (it also allows me to press a hotkey to switch to a particular Tmux window):

Tmux session tree view

The last remaining lines in my configuration file are:

bind-key p setw synchronize-panes off
bind-key P setw synchronize-panes on
bind-key r source-file ~/.config/tmux/tmux.conf \; display-message "tmux.conf reloaded"

We discussed synchronized panes earlier. I use it all the time in clustered SSH sessions. When enabled, all panes (remote SSH sessions) receive the same keystrokes. This is very useful when you want to run the same commands on many servers at once, such as navigating to a common directory, restarting a couple of services at once, or running tools like htop to quickly monitor system resources.

The last one reloads my Tmux configuration on the fly.

E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site
'When: The Scientific Secrets of Perfect Timing' book notes gemini://foo.zone/gemfeed/2025-04-19-when-book-notes.gmi 2025-04-19T10:26:05+03:00 Paul Buetow aka snonux paul@dev.buetow.org These are my personal book notes from Daniel Pink's 'When: The Scientific Secrets of Perfect Timing.' They are for me, but I hope they might be useful to you too.

"When: The Scientific Secrets of Perfect Timing" book notes



Published at 2025-04-19T10:26:05+03:00

These are my personal book notes from Daniel Pink's "When: The Scientific Secrets of Perfect Timing." They are for me, but I hope they might be useful to you too.

	  __
 (`/\
 `=\/\ __...--~~~~~-._   _.-~~~~~--...__
  `=\/\               \ /               \\
   `=\/                V                 \\
   //_\___--~~~~~~-._  |  _.-~~~~~~--...__\\
  //  ) (..----~~~~._\ | /_.~~~~----.....__\\
 ===( INK )==========\\|//====================
__ejm\___/________dwb`---`______________________

Table of Contents




You are a different kind of organism depending on the time of day. For example, school tests show worse results later in the day, especially when there are fewer computers than students available, so some students have to sit the test in the afternoon. Every person has a chronotype: a late peaker, an early peaker, or somewhere in the middle (like most people). You can assess your chronotype here:

Chronotype Assessment

Following your chronotype can lead to more happiness and higher job satisfaction.

Daily Rhythms



Peak, Trough, Rebound (Recovery): Most people experience these periods throughout the day. It's best to "eat the frog" or tackle daunting tasks during the peak. A twin peak exists every day, with mornings and early evenings being optimal for most people. Negative moods follow the opposite pattern, peaking in the afternoon. Light helps adjust but isn't the main driver of our internal clock. Like plants, humans have intrinsic rhythms.

Optimal Task Timing



  • Analytical work requiring sharpness and focus is best at the peak.
  • Creative work is more effective during non-peak times.
  • Biorhythms can sway performance by up to twenty percent.

Exercise Timing



Exercise in the morning to lose weight; you burn up to twenty percent more fat if you exercise before eating. Exercising after eating aids muscle gain, using the energy from the food. Morning exercises elevate mood, with the effect lasting all day. They also make forming a habit easier. The late afternoon is best for athletic performance due to optimal body temperature, reducing injury risk.

Drinking Habits



  • Drink water in the morning to counter mild dehydration upon waking.
  • Delay coffee consumption until cortisol production peaks, 60 to 90 minutes after waking. This helps avoid building up caffeine tolerance.
  • For an afternoon boost, have coffee once cortisol levels drop.

Afternoon Challenges ("Bermuda Triangle")



  • Mistakes are more common in hospitals during this period, such as incorrect antibiotic prescriptions or missed handwashing.
  • Traffic accidents and unfavorable judge decisions occur more frequently in the afternoon.
  • 2:55 pm is the least productive time of the day.

Breaks and Productivity



Short, restorative breaks enhance performance. Student exam results improved with a half-hour break beforehand. Even micro-breaks are beneficial: hourly five-minute walking breaks can increase productivity as much as a single 30-minute walk. Nature-based breaks are more effective than indoor ones, and physical activity during a break boosts concentration and productivity more than sitting still. Most importantly, complete detachment from work during breaks is essential for restoration.

Napping



Short naps (10-20 minutes) significantly enhance mood, alertness, and cognitive performance, improving learning and problem-solving abilities. Napping increases with age, benefiting mood, flow, and overall health. A "nappuccino," or napping after coffee, offers a double boost, as caffeine takes around 25 minutes to kick in.

Scheduling Breaks



  • Track breaks just as you do with tasks—aim for three breaks a day.
  • Every 25 minutes, look away and daydream for 20 seconds, or engage in short exercises.
  • Meditating for even three minutes is a highly effective restorative activity.
  • The "Fresh Start Effect" (e.g., beginning a diet on January 1st or a new week) impacts motivation, as does recognizing progress. At the end of each day, spends two minutes to write down accomplishments.

Final Impressions



  • The concluding experience of a vacation significantly influences overall memories.
  • Restaurant reviews often hinge on the end of the visit, highlighting extras like wrong bills or additional desserts.
  • Considering one's older future self can motivate improvements in the present.

The Midlife U Curve



Life satisfaction tends to dip in midlife, bottoming out around the forties, and rises again from around age 54.

Project Management Tips



  • Halfway through a project, there's a concentrated burst of effort (the "uh-oh effect"), triggered like an alarm when slightly behind schedule.
  • Recognizing daily accomplishments can elevate motivation and satisfaction.

These insights from "When" can guide actions to optimize performance, well-being, and satisfaction across various aspects of life.

E-Mail your comments to paul@nospam.buetow.org :-)

Other book notes of mine are:

2025-06-07 "A Monk's Guide to Happiness" book notes
2025-04-19 "When: The Scientific Secrets of Perfect Timing" book notes (You are currently reading this)
2024-10-24 "Staff Engineer" book notes
2024-07-07 "The Stoic Challenge" book notes
2024-05-01 "Slow Productivity" book notes
2023-11-11 "Mind Management" book notes
2023-07-17 "Software Developmers Career Guide and Soft Skills" book notes
2023-05-06 "The Obstacle is the Way" book notes
2023-04-01 "Never split the difference" book notes
2023-03-16 "The Pragmatic Programmer" book notes

Back to the main site
f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs gemini://foo.zone/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi 2025-04-04T23:21:01+03:00 Paul Buetow aka snonux paul@dev.buetow.org This is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The 'f' stands for FreeBSD, and the '3s' stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.

f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs



Published at 2025-04-04T23:21:01+03:00

This is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.

2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage
2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation
2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts
2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs (You are currently reading this)
2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network

f3s logo

Table of Contents




Introduction



In this blog post, we are going to install the Bhyve hypervisor.

The FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.

https://wiki.freebsd.org/bhyve

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

Check for POPCNT CPU support



POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.

To check for POPCNT support, run:

paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>

So it's there! All good.

Basic Bhyve setup



For managing the Bhyve VMs, we use vm-bhyve, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management considerably. We also install the package required to make Bhyve work with UEFI firmware.

https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts f0, f1, and f2, where re0 is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0

Bhyve stores all its data in the bhyve dataset of the zroot ZFS pool:

paul@f0:~ % zfs list | grep bhyve
zroot/bhyve                                   1.74M   453G  1.74M  /zroot/bhyve

For convenience, we also create this symlink:

paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve


Now, Bhyve is ready to rumble, but no VMs are there yet:

paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE

Rocky Linux VMs



As the guest OS for the VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.

Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

https://rockylinux.org/

ISO download



We're going to install Rocky Linux from the latest minimal ISO:

paul@f0:~ % doas vm iso \
 https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso        1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky

VM configuration



The default Bhyve VM configuration looks like this now:

paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"

The uuid and the network0_mac differ for each of the three VMs (the ones being installed on f0, f1 and f2).

To make Rocky Linux actually boot, the configuration needs a few adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14GB RAM. So we run doas vm configure rocky and modify the configuration to:

guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"

VM installation



To start the installer from the downloaded ISO, we run:

paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root     bhyve       6079 8   tcp4   *:5900                *:*

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and as we do this once a year or less often, the automation doesn't seem worth the effort.

Increase of the disk image



By default, the VM disk image is only 20G, which is a bit small for our purposes, so we have to stop the VMs again, run truncate on the image file to enlarge them to 100G, and restart the installation:

paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso

Connect to VNC



For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were vnc://f0:5900, vnc://f1:5900, and vnc://f2:5900.





I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.





After install



We perform the following steps on all three VMs. The examples below were all executed on f0 (and on the VM r0 running on it):

VM auto-start after host reboot



To automatically start the VM on the servers, we add the following to the rc.conf on the FreeBSD hosts:

paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END

The vm_delay setting isn't strictly required. It makes vm-bhyve wait 5 seconds before starting each VM, but there is currently only one VM per host; it may become useful later when there are more. After adding these lines, a Yes indicator appears in the AUTO column:

paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)

Static IP configuration



After that, we change the network configuration of the VMs from DHCP to static IPs. As per the previous post of this series, the three FreeBSD hosts were already in my /etc/hosts file:

192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org

For the Rocky VMs, we add those to the FreeBSD host systems as well:

paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END

And we configure the IPs accordingly on the VMs themselves by opening a root shell via SSH to the VMs and entering the following commands on each of the VMs:

[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END

Where:

  • 192.168.1.120 is the IP of the VM itself (here: r0.lan.buetow.org)
  • 192.168.1.1 is the address of my home router, which also does DNS.

Permitting root login



As these VMs aren't directly reachable via SSH from the internet, we enable root login by adding a line with PermitRootLogin yes to /etc/ssh/sshd_config.

Once done, we reboot the VM by running reboot inside the VM to test whether everything was configured and persisted correctly.

After the reboot, we copy a public SSH key over. E.g., I did this from my laptop as follows:

% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done

Then, we edit the /etc/ssh/sshd_config file again on all three VMs and configure PasswordAuthentication no to only allow SSH key authentication from now on.

Install latest updates



[root@r0 ~] % dnf update
[root@r0 ~] % reboot

Stress testing CPU



The aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}

You can find the repository here:

https://codeberg.org/snonux/sillybench

Silly FreeBSD host benchmark



To install it on FreeBSD, we run:

paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
  git clone https://codeberg.org/snonux/sillybench && \
  cd sillybench

And to run it:

paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000               0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000               0.4027 ns/op
PASS
ok      codeberg.org/snonux/sillybench 0.891s

Silly Rocky Linux VM @ Bhyve benchmark



OK, let's compare this with the Rocky Linux VM running on Bhyve:

[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
  git clone https://codeberg.org/snonux/sillybench && \
  cd sillybench

And to run it:

[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000               0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000               0.4345 ns/op

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

Silly FreeBSD VM @ Bhyve benchmark



But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

But here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 14GB of RAM; the benchmark won't use that many CPUs or that much memory anyway):

root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1      1000000000               0.4273 ns/op
BenchmarkCPUSilly2      1000000000               0.4286 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.949s

It's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!

Benchmarking with ubench



Let's run another, more sophisticated benchmark using ubench, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running doas pkg install ubench. It can benchmark CPU and memory performance. Here, we limit it to one CPU for the first run with -s, and then let it run at full speed (using all available CPUs in parallel) in the second run.

FreeBSD host ubench benchmark



Single CPU:

paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123


All CPUs (with all Bhyve VMs stopped):

paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701

FreeBSD VM @ Bhyve ubench benchmark



Single CPU:

root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   672792 (0.40s)
Ubench Single MEM:   852757 (0.48s)
-----------------------------------
Ubench Single AVG:   762774

Wow, the CPU in the VM was a tiny bit faster than on the host! So this was probably just a glitch in the matrix. Memory seems slower, though.

All CPUs:

root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr  4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr  4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory

The multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.

Also, during the benchmark, I noticed the bhyve process on the host was constantly using 399% of the CPU (all 4 CPUs).

  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 7449 root         14  20    0    14G    78M kqread   2   2:12 399.81% bhyve

Overall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I'm not sure whether to trust it, especially due to the swap errors. Does ubench's memory benchmark use swap space for the memory test? That wouldn't make sense and might explain the difference to some degree, though. Do you have any ideas?

Rocky Linux VM @ Bhyve ubench benchmark



Unfortunately, I wasn't able to find ubench in any of the Rocky Linux repositories. So, I skipped this test.

Conclusion



Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes, eBPF, systemd) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows or NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Read the next post of this series:

f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network

Other *BSD-related posts:

2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network
2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs (You are currently reading this)
2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts
2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation
2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage
2024-04-01 KISS high-availability with OpenBSD
2024-01-13 One reason why I love OpenBSD
2022-10-30 Installing DTail on OpenBSD
2022-07-30 Let's Encrypt with OpenBSD and Rex
2016-04-09 Jails and ZFS with Puppet on FreeBSD

E-Mail your comments to paul@nospam.buetow.org

Back to the main site
Sharing on Social Media with Gos v1.0.0 gemini://foo.zone/gemfeed/2025-03-05-sharing-on-social-media-with-gos.gmi 2025-03-04T21:22:07+02:00 Paul Buetow aka snonux paul@dev.buetow.org As you may have noticed, I like to share on Mastodon and LinkedIn all the technical things I find interesting, and this blog post is technically all about that.

Sharing on Social Media with Gos v1.0.0



Published at 2025-03-04T21:22:07+02:00

As you may have noticed, I like to share on Mastodon and LinkedIn all the technical things I find interesting, and this blog post is technically all about that.

Gos logo

Table of Contents




Introduction



Gos is a Go-based replacement (which I wrote) for Buffer.com, providing the ability to schedule and manage social media posts from the command line. It can be run, for example, every time you open a new shell or only once every N hours when you open a new shell.

I used Buffer.com to schedule and post my social media messages for a long time. However, over time, there were more problems with that service, including a slow and unintuitive UI, and the free version only allows scheduling up to 10 messages. At one point, they started to integrate an AI assistant (which would seemingly randomly pop up in separate JavaScript-powered input boxes), and then I had enough and decided I had to build my own social sharing tool—and Gos was born.

https://buffer.com
https://codeberg.org/snonux/gos

Gos features



  • Mastodon and LinkedIn support.
  • Dry run mode for testing posts without actually publishing.
  • Configurable via flags and environment variables.
  • Easy to integrate into automated workflows.
  • OAuth2 authentication for LinkedIn.
  • Image previews for LinkedIn posts.

Installation



Prerequisites



The prerequisites are:

  • Go (version 1.24 or later)
  • A supported browser (Firefox, Chrome, etc.) for OAuth2.

Build and install



Clone the repository:

git clone https://codeberg.org/snonux/gos.git
cd gos

Build the binaries:

go build -o gos ./cmd/gos
go build -o gosc ./cmd/gosc
mv gos ~/go/bin
mv gosc ~/go/bin

Or, if you want to use the Taskfile:

go-task install

Configuration



Gos requires a configuration file to store API secrets and OAuth2 credentials for each supported social media platform. The configuration is managed using a Secrets structure, which is stored as a JSON file in ~/.config/gos/gos.json.

Example Configuration File (~/.config/gos/gos.json):

{
  "MastodonURL": "https://mastodon.example.com",
  "MastodonAccessToken": "your-mastodon-access-token",
  "LinkedInClientID": "your-linkedin-client-id",
  "LinkedInSecret": "your-linkedin-client-secret",
  "LinkedInRedirectURL": "http://localhost:8080/callback"
}

Configuration fields



  • MastodonURL: The base URL of the Mastodon instance you are using (e.g., https://mastodon.social).
  • MastodonAccessToken: Your access token for the Mastodon API, which is used to authenticate your posts.
  • LinkedInClientID: The client ID for your LinkedIn app, which is needed for OAuth2 authentication.
  • LinkedInSecret: The client secret for your LinkedIn app.
  • LinkedInRedirectURL: The redirect URL configured for handling OAuth2 responses.
  • LinkedInAccessToken: Gos will automatically update this after successful OAuth2 authentication with LinkedIn.
  • LinkedInPersonID: Gos will automatically update this after successful OAuth2 authentication with LinkedIn.

Automatically managed fields



Once you finish the OAuth2 setup (after the initial run of gos), the fields LinkedInAccessToken and LinkedInPersonID will be filled in automatically. To check that everything is working without actually posting anything, you can run the app in dry run mode with the --dry option. If the access token expires, Gos will go through the OAuth2 process again.

Invoking Gos



Gos is a command-line tool for posting updates to multiple social media platforms. You can run it with various flags to customize its behaviour, such as posting in dry run mode, limiting posts by size, or targeting specific platforms.

Flags control the tool's behavior. Below are several common ways to invoke Gos and descriptions of the available flags.

Common flags



  • -dry: Run the application in dry run mode, simulating operations without making any changes.
  • -version: Display the current version of the application.
  • -compose: Compose a new entry. Default is set by composeEntryDefault.
  • -gosDir: Specify the directory for Gos' queue and database files. The default is ~/.gosdir.
  • -cacheDir: Specify the directory for Gos' cache. The default is based on the gosDir path.
  • -browser: Choose the browser for OAuth2 processes. The default is "firefox".
  • -configPath: Path to the configuration file. Default is ~/.config/gos/gos.json.
  • -platforms: The enabled platforms and their post size limits. The default is "Mastodon:500,LinkedIn:1000".
  • -target: Target number of posts per week. The default is 2.
  • -minQueued: Minimum number of queued items before a warning message is printed. The default is 4.
  • -maxDaysQueued: Maximum number of days' worth of queued posts before the target increases and pauseDays decreases. The default is 365.
  • -pauseDays: Number of days until the next post can be submitted. The default is 3.
  • -runInterval: Number of hours until the next post run. The default is 12.
  • -lookback: The number of days to look back in time to review posting history. The default is 30.
  • -geminiSummaryFor: Generate a Gemini Gemtext format summary specifying months as a comma-separated string.
  • -geminiCapsules: Comma-separated list of Gemini capsules. Used to detect Gemtext links.
  • -gemtexterEnable: Add special tags for Gemtexter, the static site generator, to the Gemini Gemtext summary.
  • -dev: For internal development purposes only.

Examples



*Dry run mode*

Dry run mode lets you simulate the entire posting process without actually sending the posts. This is useful for testing configurations or seeing what would happen before making real posts.

./gos --dry

*Normal run*

Sharing to all platforms is as simple as the following (assuming it is configured correctly):

./gos 

:-)

Gos Screenshot

However, you will notice that no messages are queued for posting yet (so it won't look like the screenshot just yet!). Relax and read on...

Composing messages to be posted



To post messages using Gos, you need to create text files containing the posts' content. These files are placed inside the directory specified by the --gosDir flag (the default is ~/.gosdir). Each text file represents a single post and must have the .txt extension. You can also simply run gos --compose to compose a new entry; it opens a new text file in the gosDir.

Basic structure of a message file



Each text file should contain the message you want to post on the specified platforms. That's it. Example of a Basic Post File ~/.gosdir/samplepost.txt:

This is a sample message to be posted on social media platforms.

Maybe add a link here: https://foo.zone

#foo #cool #gos #golang

The message is just arbitrary text, and, besides inline share tags (see later in this document) at the beginning, Gos does not parse any of the content other than ensuring the overall allowed size for the social media platform isn't exceeded. If it exceeds the limit, Gos will prompt you to edit the post using your standard text editor (as specified by the EDITOR environment variable). When posting, all the hyperlinks, hashtags, etc., are interpreted by the social platforms themselves (e.g., Mastodon, LinkedIn).
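The size check described above can be pictured roughly like this. This is a hypothetical sketch (the function name and the limits map are mine, not Gos' actual code); the limits mirror the documented -platforms default of Mastodon:500 and LinkedIn:1000:

```go
package main

import "fmt"

// exceedsLimit reports whether a post's text is too long for a platform.
// Hypothetical helper for illustration; the limits mirror the documented
// -platforms default ("Mastodon:500,LinkedIn:1000").
func exceedsLimit(text, platform string) bool {
	limits := map[string]int{"mastodon": 500, "linkedin": 1000}
	limit, ok := limits[platform]
	if !ok {
		return false // unknown platform: no limit enforced in this sketch
	}
	return len([]rune(text)) > limit // count characters, not bytes
}

func main() {
	post := "This is a sample message to be posted on social media platforms."
	fmt.Println(exceedsLimit(post, "mastodon")) // a short post fits
}
```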

Adding share tags in the filename



You can control which platforms a post is shared to, and manage other behaviours, using tags embedded in the filename. Add a tag in the format share:platform1:-platform2 to the filename to target specific platforms. This instructs Gos to share the message only to platform1 (e.g., Mastodon) and to explicitly exclude platform2 (e.g., LinkedIn). You can include multiple platforms by listing them after share:, separated by a :. Prefix a platform with - to exclude it.

Currently, only linkedin and mastodon are supported, and the shortcuts li and ma also work.

**Examples:**

  • To share only on Mastodon: ~/.gosdir/foopost.share:mastodon.txt
  • To exclude sharing on LinkedIn: ~/.gosdir/foopost.share:-linkedin.txt
  • To explicitly share on both LinkedIn and Mastodon: ~/.gosdir/foopost.share:linkedin:mastodon.txt
  • To explicitly share only on LinkedIn and exclude Mastodon: ~/.gosdir/foopost.share:linkedin:-mastodon.txt
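To illustrate the filename convention, here is a rough sketch of how such tags could be parsed. This is illustrative only, under the assumptions above, and not Gos' actual parser:

```go
package main

import (
	"fmt"
	"strings"
)

// parseShareTag extracts included and excluded platforms from a Gos-style
// filename such as "foopost.share:linkedin:-mastodon.txt".
// Illustrative sketch only; not Gos' actual implementation.
func parseShareTag(filename string) (include, exclude []string) {
	for _, part := range strings.Split(filename, ".") {
		if !strings.HasPrefix(part, "share:") {
			continue
		}
		// Platforms are separated by ":"; a leading "-" excludes one.
		for _, p := range strings.Split(strings.TrimPrefix(part, "share:"), ":") {
			if strings.HasPrefix(p, "-") {
				exclude = append(exclude, strings.TrimPrefix(p, "-"))
			} else if p != "" {
				include = append(include, p)
			}
		}
	}
	return include, exclude
}

func main() {
	in, ex := parseShareTag("foopost.share:linkedin:-mastodon.txt")
	fmt.Println(in, ex) // [linkedin] [mastodon]
}
```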

Besides encoding share tags in the filename, they can also be embedded within the .txt file content to be queued. For example, a file named ~/.gosdir/foopost.txt with the following content:

share:mastodon The content of the post here

or

share:mastodon

The content of the post is here https://some.foo/link

#some #hashtags

Gos will parse this content, extract the tags, and queue it as ~/.gosdir/db/platforms/mastodon/foopost.share:mastodon.extracted.txt.... (see how post queueing works later in this document).

Using the prio tag



Gos randomly picks any queued message without any specific order or priority. However, you can assign a higher priority to a message. The priority determines the order in which posts are processed, with messages without a priority tag being posted last and those with priority tags being posted first. If multiple messages have the priority tag, then a random message will be selected from them.

*Examples using the Priority tag:*

  • To share only on Mastodon: ~/.gosdir/foopost.prio.share:mastodon.txt
  • To not share on LinkedIn: ~/.gosdir/foopost.prio.share:-linkedin.txt
  • To explicitly share on both LinkedIn and Mastodon: ~/.gosdir/foopost.prio.share:linkedin:mastodon.txt
  • To explicitly share only on LinkedIn and exclude Mastodon: ~/.gosdir/foopost.prio.share:linkedin:-mastodon.txt

There is more: you can also use the soon tag. It is almost the same as the prio tag, just one priority level lower.

More tags



  • A .ask. in the filename will prompt you to choose whether to queue, edit, or delete a file before queuing it.
  • A .now. in the filename will schedule a post immediately, regardless of the target status.

So you could also have filenames like those:

  • ~/.gosdir/foopost.ask.txt
  • ~/.gosdir/foopost.now.txt
  • ~/.gosdir/foopost.ask.share:mastodon.txt
  • ~/.gosdir/foopost.ask.prio.share:mastodon.txt
  • ~/.gosdir/foopost.ask.now.share:-mastodon.txt
  • ~/.gosdir/foopost.now.share:-linkedin.txt

etc...

All of the above also works with embedded tags. E.g.:

share:mastodon,ask,prio Hello World :-)

or

share:mastodon,ask,prio

Hello World :-)
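
Extracting embedded tags from the content could be sketched roughly as below. This is an assumption about how such extraction might work (the helper extractTags is hypothetical), not Gos's actual code.

```go
package main

import (
	"fmt"
	"strings"
)

// extractTags pulls a leading tag token such as "share:mastodon,ask,prio"
// off the front of a post's content and returns the tags and the
// remaining body. It assumes the tag is the very first token with no
// leading whitespace. Illustrative sketch only.
func extractTags(content string) (tags []string, body string) {
	fields := strings.Fields(content)
	if len(fields) > 0 && strings.HasPrefix(fields[0], "share:") {
		tags = strings.Split(fields[0], ",")
		body = strings.TrimSpace(strings.TrimPrefix(content, fields[0]))
		return tags, body
	}
	return nil, content
}

func main() {
	tags, body := extractTags("share:mastodon,ask,prio Hello World :-)")
	fmt.Println(tags) // [share:mastodon ask prio]
	fmt.Println(body) // Hello World :-)
}
```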

The gosc binary



gosc stands for Gos Composer and will simply launch your $EDITOR on a new text file in the gosDir. It's the same as running gos --compose, really. It is a quick way of composing new posts. Once composed, it will ask for your confirmation on whether the message should be queued or not.

How queueing works in gos



When you place a message file in the gosDir, Gos processes it by moving the message through a queueing system before posting it to the target social media platforms. A message's lifecycle includes several key stages, from creation to posting, all managed through the ./db/platforms/PLATFORM directories.

Step-by-step queueing process



1. Inserting a Message into gosDir: You start by creating a text file that represents your post (e.g., foo.txt) and placing it in the gosDir. When Gos runs, this file is processed. The easiest way is to use gosc here.

2. Moving to the Queue: Upon running Gos, the tool identifies the message in the gosDir and places it into the queue for the specified platform. The message is moved into the appropriate directory for each platform in ./db/platforms/PLATFORM. During this stage, the message file is renamed to include a timestamp indicating when it was queued and given a .queued extension.

*Example: If a message is queued for LinkedIn, the filename might look like this:*

~/.gosdir/db/platforms/linkedin/foo.share:-mastodon.txt.20241022-102343.queued

3. Posting the Message: Once a message is placed in the queue, Gos posts it to the specified social media platforms.

4. Renaming to .posted: After a message is successfully posted to a platform, the corresponding .queued file is renamed to have a .posted extension, and the filename timestamp is also updated. This signals that the post has been processed and published.

*Example - After a successful post to LinkedIn, the message file might look like this:*

./db/platforms/linkedin/foo.share:-mastodon.txt.20241112-121323.posted

How message selection works in gos



Gos decides which messages to post using a combination of priority, platform-specific tags, and timing rules. The message selection process ensures that messages are posted according to your configured cadence and targets while respecting pauses between posts and previously met goals.

The key factors in message selection are:

  • Target Number of Posts Per Week: The -target flag defines how many posts per week should be made to a specific platform. This target helps Gos manage the posting rate, ensuring that the right number of posts are made without exceeding the desired frequency.
  • Post History Lookback: The -lookback flag tells Gos how many days back to look in the post history to calculate whether the weekly post target has already been met. It ensures that previously posted content is considered before deciding to queue up another message.
  • Message Priority: Messages with no priority value are processed after those with priority. If two messages have the same priority, one is selected randomly.
  • Pause Between Posts: The -pauseDays flag allows you to specify a minimum number of days to wait between posts for the same platform. This prevents oversaturation of content and ensures that posts are spread out over time.

Database replication



I simply use Syncthing to back up/sync my gosDir. Note that I run Gos on my personal laptop; there is no need to run it from a server.

https://syncthing.net

Post summary as gemini gemtext



For my blog, I want to post a summary of all the social messages posted over the last couple of months. For an example, have a look here:

./2025-01-01-posts-from-october-to-december-2024.html

To accomplish this, run:

gos --geminiSummaryFor 202410,202411,202412

This outputs the summary for the three specified months, as shown in the example. The summary includes posts from all social media networks but removes duplicates.

Also, add the --gemtexterEnable flag, if you are using Gemtexter:


gos --gemtexterEnable --geminiSummaryFor 202410,202411,202412

Gemtexter

In case there are HTTP links that translate directly to the Geminispace for certain capsules, specify the Gemini capsules as a comma-separated list as follows:

gos --gemtexterEnable --geminiSummaryFor 202410,202411,202412 --geminiCapsules "foo.zone,paul.buetow.org"

It will then also generate Gemini Gemtext links in the summary page and flag them with (Gemini).
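
A plausible way to map such HTTP links to Gemini links is a scheme rewrite for the configured capsules. This is a guess at the mapping (the function toGemini is made up, and details like .html vs .gmi paths may well differ in the real tool):

```go
package main

import (
	"fmt"
	"net/url"
)

// toGemini rewrites an HTTP(S) link to a gemini:// link when its host
// is one of the configured capsules. Hypothetical sketch; the actual
// Gos translation may also adjust paths (e.g., .html to .gmi).
func toGemini(link string, capsules map[string]bool) (string, bool) {
	u, err := url.Parse(link)
	if err != nil || !capsules[u.Host] {
		return link, false // leave non-capsule links untouched
	}
	u.Scheme = "gemini"
	return u.String(), true
}

func main() {
	capsules := map[string]bool{"foo.zone": true, "paul.buetow.org": true}
	g, ok := toGemini("https://foo.zone/gemfeed/index.html", capsules)
	fmt.Println(g, ok) // gemini://foo.zone/gemfeed/index.html true
}
```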

Conclusion



Overall, this was a fun little Go project with practical use for me personally. I hope you also had fun reading this, and maybe you will use it as well.

E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site
Random Weird Things - Part Ⅱ gemini://foo.zone/gemfeed/2025-02-08-random-weird-things-ii.gmi 2025-02-08T11:06:16+02:00 Paul Buetow aka snonux paul@dev.buetow.org Every so often, I come across random, weird, and unexpected things on the internet. I thought it would be neat to share them here from time to time. This is the second run.

Random Weird Things - Part Ⅱ



Published at 2025-02-08T11:06:16+02:00

Every so often, I come across random, weird, and unexpected things on the internet. I thought it would be neat to share them here from time to time. This is the second run.

2024-07-05 Random Weird Things - Part Ⅰ
2025-02-08 Random Weird Things - Part Ⅱ (You are currently reading this)

/\_/\           /\_/\
( o.o ) WHOA!! ( o.o )
> ^ <           > ^ <
/   \    MOEEW! /   \
/______\       /______\

Table of Contents




11. The SQLite codebase is a gem



Check this out:

SQLite Gem

Source:

https://wetdry.world/@memes/112717700557038278

Go Programming



12. Official Go font



The Go programming language has an official font called "Go Font." It was created to complement the aesthetic of the Go language, ensuring clear and legible rendering of code. The font includes a monospace version for code and a proportional version for general text, supporting consistent look and readability in Go-related materials and development environments.

Check out some Go code displayed using the Go font:

Go font code

https://go.dev/blog/go-fonts

The design emphasizes simplicity and readability, reflecting Go's philosophy of clarity and efficiency.

I found it interesting and/or weird, as Go is a programming language. Why should it bother having its own font? I have never seen another open-source project like Go do this. But I also like it. Maybe I will use it in the future for this blog :-)

13. Go functions can have methods



Methods on struct types? Well known. Methods on named types like int and string? Also known, but a bit less so. Methods on function types? That sounds a bit funky, but it's possible, too! For a demonstration, have a look at this snippet:

package main

import "log"

type fun func() string

func (f fun) Bar() string {
        return "Bar"
}

func main() {
        var f fun = func() string {
                return "Foo"
        }
        log.Println("Example 1: ", f())
        log.Println("Example 2: ", f.Bar())
        log.Println("Example 3: ", fun(f.Bar).Bar())
        log.Println("Example 4: ", fun(fun(f.Bar).Bar).Bar())
}

It runs just fine:

❯ go run main.go
2025/02/07 22:56:14 Example 1:  Foo
2025/02/07 22:56:14 Example 2:  Bar
2025/02/07 22:56:14 Example 3:  Bar
2025/02/07 22:56:14 Example 4:  Bar

macOS



For personal computing, I don't use Apple, but I have to use it for work.

14. ß and ss are treated the same



Know German? In German, the letter "sharp s" is written as ß. On macOS, ß is treated the same as ss.

On a case-insensitive file system, as macOS uses by default, not only are uppercase and lowercase letters treated the same; special characters like the German "ß" are also considered equivalent to their ASCII counterparts (in this case, "ss") through Unicode case folding.

So, even though "Maß" and "Mass" are not strictly the same string, the macOS file system still treats them as the same filename. This can sometimes lead to unexpected behaviour. Check this out:

❯ touch Maß
❯ ls -l
-rw-r--r--@ 1 paul  wheel  0 Feb  7 23:02 Maß
❯ touch Mass
❯ ls -l
-rw-r--r--@ 1 paul  wheel  0 Feb  7 23:02 Maß
❯ rm Mass
❯ ls -l

❯ touch Mass
❯ ls -ltr
-rw-r--r--@ 1 paul  wheel  0 Feb  7 23:02 Mass
❯ rm Maß
❯ ls -l


15. Colon as file path separator



Classic Mac OS used the colon as the file path separator on its HFS file system. The colon also plays a special role in Acorn's ADFS (an unrelated file system from the RISC OS world), where it separates the drive from the path and dots separate the directories. A typical ADFS file pathname on a hard disc might be:

ADFS::4.$.Documents.Techwriter.Myfile

I can't reproduce the colon behaviour on my (work) Mac, though, as it now uses the APFS file system. In essence, HFS is an older file system, while APFS is a contemporary file system optimized for Apple's modern devices.

https://social.jvns.ca/@b0rk/113041293527832730

16. Polyglots - programs written in multiple languages



A coding polyglot is a program or script written so that it can be executed in multiple programming languages without modification. This is typically achieved by leveraging syntax overlaps or crafting valid and meaningful code in each targeted language. Polyglot programs are often created as a challenge or for demonstration purposes to showcase language similarities or clever coding techniques.

Check out my very own polyglot:

The fibonatti.pl.c Polyglot

17. Languages where indices start at 1



Array indices start at 1 instead of 0 in some programming languages, known as one-based indexing. This can be controversial because zero-based indexing is more common in popular languages like C, C++, Java, and Python. One-based indexing can lead to off-by-one errors when developers switch between languages with different indexing schemes.

Languages with One-Based Indexing:

  • Fortran
  • MATLAB
  • Lua
  • R (for vectors and lists)
  • Smalltalk
  • Julia (by default, although zero-based indexing is also possible)

foo.lua example:

arr = {10, 20, 30, 40, 50}
print(arr[1]) -- Accessing the first element

❯ lua foo.lua
10

One-based indexing is more natural for human-readable, mathematical, and theoretical contexts, where counting traditionally starts from one.

18. Perl Poetry



Perl Poetry is a playful and creative practice within the programming community where Perl code is written as a poem. These poems are crafted to be syntactically valid Perl code and make sense as poetic text, often with whimsical or humorous intent. This showcases Perl's flexibility and expressiveness, as well as the creativity of its programmers.

See this poetry of my own; the Perl interpreter does not yield any syntax errors when parsing it. But the poem doesn't do anything useful when executed:

# (C) 2006 by Paul C. Buetow

Christmas:{time;#!!!

Children: do tell $wishes;

Santa: for $each (@children) { 
BEGIN { read $each, $their, wishes and study them; use Memoize#ing

} use constant gift, 'wrapping'; 
package Gifts; pack $each, gift and bless $each and goto deliver
or do import if not local $available,!!! HO, HO, HO;

redo Santa, pipe $gifts, to_childs;
redo Santa and do return if last one, is, delivered; 

deliver: gift and require diagnostics if our $gifts ,not break;
do{ use NEXT; time; tied $gifts} if broken and dump the, broken, ones;
The_children: sleep and wait for (each %gift) and try { to => untie $gifts };

redo Santa, pipe $gifts, to_childs;
redo Santa and do return if last one, is, delivered; 

The_christmas_tree: formline s/ /childrens/, $gifts;
alarm and warn if not exists $Christmas{ tree}, @t, $ENV{HOME};  
write <<EMail
 to the parents to buy a new christmas tree!!!!111
 and send the
EMail
;wait and redo deliver until defined local $tree;

redo Santa, pipe $gifts, to_childs;
redo Santa and do return if last one, is, delivered ;}

END {} our $mission and do sleep until next Christmas ;}

__END__

This is perl, v5.8.8 built for i386-freebsd-64int

More Perl Poetry of mine

19. CSS3 is Turing complete



CSS3 is Turing complete because it can simulate a Turing machine using only CSS animations and styles without any JavaScript or external logic. This is achieved by using keyframe animations to change the styles of HTML elements in a way that encodes computation, performing calculations and state transitions.

Is CSS turing complete?

It is surprising because CSS is primarily a styling language intended for the presentation layer of web pages, not for computation or logic. Its capability to perform complex computations defies its typical use case and showcases the unintended computational power that can emerge from the creative use of seemingly straightforward technologies.

Check out this 100% CSS implementation of Conway's Game of Life:



CSS Conways Game of Life

Conway's Game of Life is Turing complete because it can simulate a universal Turing machine, meaning it can perform any computation that a computer can, given the right initial conditions and sufficient time and space. Suppose a language can implement Conway's Game of Life. In that case, it demonstrates the language's ability to handle complex state transitions and computations. It has the necessary constructs (like iteration, conditionals, and data manipulation) to simulate any algorithm, thus confirming its Turing completeness.

20. The biggest shell programs



One would think that shell scripts are only suitable for small tasks. Well, that assumption must be wrong, as there are huge shell programs out there (up to 87k LOC) which aren't auto-generated but hand-written!

The Biggest Shell Programs in the World

My Gemtexter (bash) is only 1329 LOC as of now. So it's tiny.

Gemtexter - One Bash script to rule it all

I hope you had some fun. E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site
f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts gemini://foo.zone/gemfeed/2025-02-01-f3s-kubernetes-with-freebsd-part-3.gmi 2025-01-30T09:22:06+02:00 Paul Buetow aka snonux paul@dev.buetow.org This is the third blog post about my f3s series for my self-hosting demands in my home lab. f3s? The 'f' stands for FreeBSD, and the '3s' stands for k3s, the Kubernetes distribution we will use on FreeBSD-based physical machines.

f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts



Published at 2025-01-30T09:22:06+02:00

This is the third blog post about my f3s series for my self-hosting demands in my home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution we will use on FreeBSD-based physical machines.

2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage
2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation
2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts (You are currently reading this)
2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs
2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network

f3s logo

Table of Contents




Introduction



In this blog post, we are setting up the UPS for the cluster. A UPS, or Uninterruptible Power Supply, safeguards my cluster from unexpected power outages and surges. It acts as a backup battery that kicks in when the electricity cuts out—especially useful in my area, where power cuts are frequent—allowing for a graceful system shutdown and preventing data loss and corruption. This is especially important since I will also store some of my data on the f3s nodes.

Changes since last time



FreeBSD upgrade from 14.1 to 14.2



There has been a new release since the last blog post in this series. The upgrade from 14.1 was as easy as:

paul@f0: ~ % doas freebsd-update fetch
paul@f0: ~ % doas freebsd-update install
paul@f0: ~ % doas freebsd-update -r 14.2-RELEASE upgrade
paul@f0: ~ % doas freebsd-update install
paul@f0: ~ % doas shutdown -r now

And after rebooting, I ran:

paul@f0: ~ % doas freebsd-update install
paul@f0: ~ % doas pkg update
paul@f0: ~ % doas pkg upgrade
paul@f0: ~ % doas shutdown -r now

And after another reboot, I was on 14.2:

paul@f0:~ % uname -a
FreeBSD f0.lan.buetow.org 14.2-RELEASE FreeBSD 14.2-RELEASE 
 releng/14.2-n269506-c8918d6c7412 GENERIC amd64

And, of course, I ran this on all 3 nodes!

A new home (behind the TV)



I've put all the infrastructure behind my TV, as plenty of space is available. The TV hides most of the setup, which drastically improved the SAF (spouse acceptance factor).

New hardware placement arrangement

I got rid of the mini-switch I mentioned in the previous blog post. I have the TP-Link EAP615-Wall mounted on the wall nearby, which is my OpenWrt-powered Wi-Fi hotspot. It also has 3 Ethernet ports, to which I connected the Beelink nodes. That's the device you see at the very top.

The Ethernet cables go downward through the cable boxes to the Beelink nodes. In addition to the Beelink f3s nodes, I connected the TP-Link to the UPS as well (not discussed further in this blog post, but the positive side effect is that my Wi-Fi will still work during a power loss for some time—and during a power cut, the Beelink nodes will still be able to communicate with each other).

On the very left (the black box) is the UPS, with four power outlets. Three go to the Beelink nodes, and one goes to the TP-Link. A USB output is also connected to the first Beelink node, f0.

On the very right (halfway hidden behind the TV) are the 3 Beelink nodes stacked on top of each other. The only downside (or upside?) is that my 14-month-old daughter is now chaos-testing the Beelink nodes, as the red power buttons (now within her reach) are very tempting to press when she passes by. :-) Luckily, that will only cause graceful system shutdowns!

The UPS hardware



I wanted a UPS that I could connect to via FreeBSD, and that would provide enough backup power to operate the cluster for a couple of minutes (it turned out to be around an hour, but this time will likely be shortened after future hardware upgrades, like additional drives and a backup enclosure) and to automatically initiate the shutdown of all the f3s nodes.

I decided on the APC Back-UPS BX750MI model because:

  • Zero noise level when there is no power cut (some light noise when the battery is in operation during a power cut).
  • Cost: It is relatively affordable (not costing thousands).
  • USB connectivity: Can be connected via USB to one of the FreeBSD hosts to read the UPS status.
  • A power output of 750VA (or 410 watts), suitable for an hour of runtime for my f3s nodes (plus the Wi-Fi router).
  • Multiple power outlets: Can connect all 3 f3s nodes directly.
  • User-replaceable batteries: I can replace the batteries myself after two years or more (depending on usage).
  • Its compact design. Overall, I like how it looks.

The APC Back-UPS BX750MI in operation.

Configuring FreeBSD to Work with the UPS



USB Device Detection



Once plugged in via USB on FreeBSD, I could see the following in the kernel messages:

paul@f0: ~ % doas dmesg | grep UPS
ugen0.2: <American Power Conversion Back-UPS BX750MI> at usbus0

apcupsd Installation



To make use of the USB connection, the apcupsd package had to be installed:

paul@f0: ~ % doas pkg install apcupsd

I have made the following modifications to the configuration file so that the UPS can be used via the USB interface:

paul@f0:/usr/local/etc/apcupsd % diff -u apcupsd.conf.sample  apcupsd.conf
--- apcupsd.conf.sample 2024-11-01 16:40:42.000000000 +0200
+++ apcupsd.conf        2024-12-03 10:58:24.009501000 +0200
@@ -31,7 +31,7 @@
 #     940-1524C, 940-0024G, 940-0095A, 940-0095B,
 #     940-0095C, 940-0625A, M-04-02-2000
 #
-UPSCABLE smart
+UPSCABLE usb

 # To get apcupsd to work, in addition to defining the cable
 # above, you must also define a UPSTYPE, which corresponds to
@@ -88,8 +88,10 @@
 #                            that apcupsd binds to that particular unit
 #                            (helpful if you have more than one USB UPS).
 #
-UPSTYPE apcsmart
-DEVICE /dev/usv
+UPSTYPE usb
+DEVICE

 # POLLTIME <int>
 #   Interval (in seconds) at which apcupsd polls the UPS for status. This

I left the remaining settings as the default ones; for example, the following are of main interest:

# If during a power failure, the remaining battery percentage
# (as reported by the UPS) is below or equal to BATTERYLEVEL,
# apcupsd will initiate a system shutdown.
BATTERYLEVEL 5

# If during a power failure, the remaining runtime in minutes
# (as calculated internally by the UPS) is below or equal to MINUTES,
# apcupsd, will initiate a system shutdown.
MINUTES 3

I then enabled and started the daemon:

paul@f0:/usr/local/etc/apcupsd % doas sysrc apcupsd_enable=YES
apcupsd_enable:  -> YES
paul@f0:/usr/local/etc/apcupsd % doas service apcupsd start
Starting apcupsd.

UPS Connectivity Test



And voila, I could now access the UPS information via the apcaccess command; how convenient :-) (I also read through the manual page, which provides a good understanding of what else can be done with it!).

paul@f0:~ % apcaccess
APC      : 001,035,0857
DATE     : 2025-01-26 14:43:27 +0200
HOSTNAME : f0.lan.buetow.org
VERSION  : 3.14.14 (31 May 2016) freebsd
UPSNAME  : f0.lan.buetow.org
CABLE    : USB Cable
DRIVER   : USB UPS Driver
UPSMODE  : Stand Alone
STARTTIME: 2025-01-26 14:43:25 +0200
MODEL    : Back-UPS BX750MI
STATUS   : ONLINE
LINEV    : 230.0 Volts
LOADPCT  : 4.0 Percent
BCHARGE  : 100.0 Percent
TIMELEFT : 65.3 Minutes
MBATTCHG : 5 Percent
MINTIMEL : 3 Minutes
MAXTIME  : 0 Seconds
SENSE    : Medium
LOTRANS  : 145.0 Volts
HITRANS  : 295.0 Volts
ALARMDEL : No alarm
BATTV    : 13.6 Volts
LASTXFER : Automatic or explicit self test
NUMXFERS : 0
TONBATT  : 0 Seconds
CUMONBATT: 0 Seconds
XOFFBATT : N/A
SELFTEST : NG
STATFLAG : 0x05000008
SERIALNO : 9B2414A03599
BATTDATE : 2001-01-01
NOMINV   : 230 Volts
NOMBATTV : 12.0 Volts
NOMPOWER : 410 Watts
END APC  : 2025-01-26 14:44:06 +0200

APC Info on Partner Nodes:



So far, so good. Host f0 would shut down itself when short on power. But what about the f1 and f2 nodes? They aren't connected directly to the UPS and, therefore, wouldn't know that their power is about to be cut off. For this, apcupsd running on the f1 and f2 nodes can be configured to retrieve UPS information via the network from the apcupsd server running on the f0 node, which is connected directly to the APC via USB.

Of course, this won't work when f0 is down. In this case, no operational node would be connected to the UPS via USB; therefore, the current power status would not be known. However, I consider this a rare circumstance. Furthermore, in case of an f0 system crash, sudden power outages on the two other nodes would occur at different times, making real data loss (the main concern here) less likely.

And if f0 is down and f1 and f2 receive new data and crash midway, it's likely that a client (e.g., an Android app or another laptop) still has the data stored on it, making data recoverable and data loss overall nearly impossible. I'd receive an alert if any of the nodes go down (more on monitoring later in this blog series).

Installation on partners



To do this, I installed apcupsd via doas pkg install apcupsd on f1 and f2, and then I could connect to it this way:

paul@f1:~ % apcaccess -h f0.lan.buetow.org | grep Percent
LOADPCT  : 12.0 Percent
BCHARGE  : 94.0 Percent
MBATTCHG : 5 Percent

But I want the daemon to be configured and enabled in such a way that it connects to the master UPS node (the one with the UPS connected via USB) so that it can also initiate a system shutdown when the UPS battery reaches low levels. For that, apcupsd itself needs to be aware of the UPS status.

On f1 and f2, I changed the configuration to use f0 (where apcupsd is listening) as a remote device. I also changed the MINUTES setting from 3 to 6 and the BATTERYLEVEL setting from 5 to 10 to ensure that the f1 and f2 nodes could still connect to the f0 node for UPS information before f0 decides to shut down itself. So f1 and f2 must shut down earlier than f0:

paul@f2:/usr/local/etc/apcupsd % diff -u apcupsd.conf.sample apcupsd.conf
--- apcupsd.conf.sample 2024-11-01 16:40:42.000000000 +0200
+++ apcupsd.conf        2025-01-26 15:52:45.108469000 +0200
@@ -31,7 +31,7 @@
 #     940-1524C, 940-0024G, 940-0095A, 940-0095B,
 #     940-0095C, 940-0625A, M-04-02-2000
 #
-UPSCABLE smart
+UPSCABLE ether

 # To get apcupsd to work, in addition to defining the cable
 # above, you must also define a UPSTYPE, which corresponds to
@@ -52,7 +52,6 @@
 #                            Network Information Server. This is used if the
 #                            UPS powering your computer is connected to a
 #                            different computer for monitoring.
-#
 # snmp      hostname:port:vendor:community
 #                            SNMP network link to an SNMP-enabled UPS device.
 #                            Hostname is the ip address or hostname of the UPS
@@ -88,8 +87,8 @@
 #                            that apcupsd binds to that particular unit
 #                            (helpful if you have more than one USB UPS).
 #
-UPSTYPE apcsmart
-DEVICE /dev/usv
+UPSTYPE net
+DEVICE f0.lan.buetow.org:3551

 # POLLTIME <int>
 #   Interval (in seconds) at which apcupsd polls the UPS for status. This
@@ -147,12 +146,12 @@
 # If during a power failure, the remaining battery percentage
 # (as reported by the UPS) is below or equal to BATTERYLEVEL,
 # apcupsd will initiate a system shutdown.
-BATTERYLEVEL 5
+BATTERYLEVEL 10

 # If during a power failure, the remaining runtime in minutes
 # (as calculated internally by the UPS) is below or equal to MINUTES,
 # apcupsd, will initiate a system shutdown.
-MINUTES 3
+MINUTES 6

 # If during a power failure, the UPS has run on batteries for TIMEOUT
 # many seconds or longer, apcupsd will initiate a system shutdown.

So I also ran the following commands on f1 and f2:

paul@f1:/usr/local/etc/apcupsd % doas sysrc apcupsd_enable=YES
apcupsd_enable:  -> YES
paul@f1:/usr/local/etc/apcupsd % doas service apcupsd start
Starting apcupsd.

And then I was able to connect to localhost via the apcaccess command:

paul@f1:~ % doas apcaccess | grep Percent
LOADPCT  : 5.0 Percent
BCHARGE  : 95.0 Percent
MBATTCHG : 5 Percent

Power outage simulation



Pulling the plug



I simulated a power outage by removing the power input from the APC. Immediately, the following message appeared on all the nodes:

Broadcast Message from root@f0.lan.buetow.org
        (no tty) at 15:03 EET...

Power failure. Running on UPS batteries.                                              

I ran the following command to confirm the available battery time:

paul@f0:/usr/local/etc/apcupsd % apcaccess -p TIMELEFT
63.9 Minutes

And after around one hour (f1 and f2 a bit earlier, f0 a bit later due to the different BATTERYLEVEL and MINUTES settings outlined earlier), the following broadcast was sent out:

Broadcast Message from root@f0.lan.buetow.org
        (no tty) at 15:08 EET...

        *** FINAL System shutdown message from root@f0.lan.buetow.org ***

System going down IMMEDIATELY

apcupsd initiated shutdown

And all the nodes shut down safely before the UPS ran out of battery!

Restoring power



After restoring power, I checked the logs in /var/log/daemon.log and found the following on all 3 nodes:

Jan 26 17:36:24 f2 apcupsd[2159]: Power failure.
Jan 26 17:36:30 f2 apcupsd[2159]: Running on UPS batteries.
Jan 26 17:36:30 f2 apcupsd[2159]: Battery charge below low limit.
Jan 26 17:36:30 f2 apcupsd[2159]: Initiating system shutdown!
Jan 26 17:36:30 f2 apcupsd[2159]: User logins prohibited
Jan 26 17:36:32 f2 apcupsd[2159]: apcupsd exiting, signal 15
Jan 26 17:36:32 f2 apcupsd[2159]: apcupsd shutdown succeeded

All good :-)

Conclusion



I have the same kind of UPS (but with a bit more capacity) for my main work setup, which powers my 28" screen, music equipment, etc. It has already been helpful a couple of times during power outages here, so I am sure that the smaller UPS for the f3s setup will be of great use.

Read the next post of this series:

f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

Other BSD related posts are:

2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network
2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs
2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts (You are currently reading this)
2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation
2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage
2024-04-01 KISS high-availability with OpenBSD
2024-01-13 One reason why I love OpenBSD
2022-10-30 Installing DTail on OpenBSD
2022-07-30 Let's Encrypt with OpenBSD and Rex
2016-04-09 Jails and ZFS with Puppet on FreeBSD

E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site
Working with an SRE Interview gemini://foo.zone/gemfeed/2025-01-15-working-with-an-sre-interview.gmi 2025-01-15T00:16:04+02:00 Paul Buetow aka snonux paul@dev.buetow.org I have been interviewed by Florian Buetow on `cracking-ai-engineering.com` about what it's like working with a Site Reliability Engineer from the point of view of a Software Engineer, Data Scientist, and AI Engineer.

Working with an SRE Interview



Published at 2025-01-15T00:16:04+02:00

I have been interviewed by Florian Buetow on cracking-ai-engineering.com about what it's like working with a Site Reliability Engineer from the point of view of a Software Engineer, Data Scientist, and AI Engineer.

See original interview here
Cracking AI Engineering

Below, I am posting the interview here on my blog as well.

Table of Contents




Preamble



In this insightful interview, Paul Bütow, a Principal Site Reliability Engineer at Mimecast, shares over a decade of experience in the field. Paul highlights the role of an Embedded SRE, emphasizing the importance of automation, observability, and effective incident management. We also focused on the key question of how you can work effectively with an SRE, whether you are an individual contributor or a manager, a software engineer or a data scientist, and how you can learn more about site reliability engineering.

Introducing Paul



Hi Paul, please introduce yourself briefly to the audience. Who are you, what do you do for a living, and where do you work?

My name is Paul Bütow, I work at Mimecast, and I’m a Principal Site Reliability Engineer there. I’ve been with Mimecast for almost ten years now. The company specializes in email security, including things like archiving, phishing detection, malware protection, and spam filtering.

You mentioned that you’re an ‘Embedded SRE.’ What does that mean exactly?

It means that I’m directly part of the software engineering team, not in a separate Ops department. I ensure that nothing is deployed manually, and everything runs through automation. I also set up monitoring and observability. These are two distinct aspects: monitoring alerts us when something breaks, while observability helps us identify trends. I also create runbooks so we know what to do when specific incidents occur frequently.

Infrastructure SREs, on the other hand, handle the foundational setup, like providing the Kubernetes cluster itself or ensuring the operating systems are installed. They don't work on the application directly but ensure the base infrastructure is there for others to use. This works well when a company has multiple teams that need shared infrastructure.

How did you get started?



How did your interest in Linux or FreeBSD start?

It began during my school days. We had a PC with DOS at home, and I eventually bought Suse Linux 5.3. Shortly after, I discovered FreeBSD because I liked its handbook so much. I wanted to understand exactly how everything worked, so I also tried Linux from Scratch. That involves installing every package manually to gain a better understanding of operating systems.

https://www.FreeBSD.org
https://linuxfromscratch.org/

And after school, you pursued computer science, correct?

Exactly. I wasn’t sure at first whether I wanted to be a software developer or a system administrator. I applied for both and eventually accepted an offer as a Linux system administrator. This was before 'SRE' became a buzzword, but much of what I did back then (automation, infrastructure as code, monitoring) is now considered part of the typical SRE role.

Roles and Career Progression



Tell us about how you joined Mimecast. When did you fully embrace the SRE role?

I started as a Linux sysadmin at 1&1. I managed an ad server farm with hundreds of systems and later handled load balancers. Together with an architect, we managed F5 load balancers distributing around 2,000 services, including for portals like web.de and GMX. I also led the operations team technically for a while before moving to London to join Mimecast.

At Mimecast, the job title was explicitly 'Site Reliability Engineer.' The biggest difference was that I was no longer in a separate Ops department but embedded directly within the storage and search backend team. I loved that because we could plan features together, from automation to measurability and observability. Mimecast also operates thousands of physical servers for email archiving, which was fascinating since I already had experience with large distributed systems at 1&1. It was the right step for me because it allowed me to work close to the code while remaining hands-on with infrastructure.

What are the differences between SRE, DevOps, SysAdmin, and Architects?

SREs are like the next step after SysAdmins. A SysAdmin might manually install servers, replace disks, or use simple scripts for automation, while SREs use infrastructure as code and focus on reliability through SLIs, SLOs, and automation. DevOps isn’t really a job; it’s more of a way of working, where developers are involved in operations tasks like setting up CI/CD pipelines or on-call shifts. Architects focus on designing systems and infrastructures, such as load balancers or distributed systems, working alongside SREs to ensure the systems meet the reliability and scalability requirements. The specific responsibilities of each role depend on the company, and there is often overlap.

What are the most important reliability lessons you’ve learned so far?

  • Don’t leave SRE aspects as an afterthought. It’s much better to discuss automation, monitoring, SLIs, and SLOs early on. Traditional sysadmins often installed systems manually, but today we do everything via infrastructure as code, using tools like Terraform or Puppet.
  • I also distinguish between monitoring and observability. Monitoring tells us, 'The server is down, alarm!' Observability dives deeper, showing trends like increasing latency so we can act proactively.
  • SLI, SLO, and SLA are core elements. We focus on what users actually experience, for example how quickly an email is sent, and set our goals accordingly.
  • Runbooks are also crucial. When something goes wrong at night, you don’t want to start from scratch. A runbook outlines how to debug and resolve specific problems, saving time and reducing downtime.
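
To make the SLI idea concrete, here is a toy sketch (my own illustration, not from the interview; the log format is made up) that computes an availability SLI from an access log with plain shell tools:

```shell
#!/bin/sh
# Toy SLI calculation: availability = share of non-5xx requests.
# Hypothetical log format: "<timestamp> <http-status> <latency-ms>"
compute_availability_sli() {
    awk '
        { total++ }
        $2 < 500 { good++ }
        END {
            if (total == 0) { print "no data"; exit 1 }
            # Compare this ratio against your SLO target, e.g. 0.999.
            printf "SLI: %.4f (good=%d total=%d)\n", good/total, good, total
        }
    ' "$1"
}
```

In a real setup the numbers would come from your monitoring system rather than from raw logs, but the principle of a measured ratio compared against a target is the same.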

Anecdotes and Best Practices



Runbooks sound very practical. Can you explain how they’re used day-to-day?

Runbooks are essentially guides for handling specific incidents. For instance, if a service won’t start, the runbook will specify where the logs are and which commands to use. Observability takes it a step further, helping us spot changes early, like rising error rates or latency, so we can address issues before they escalate.
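
As a sketch of what a runbook-linked helper might look like (the service name and log path below are hypothetical, not Mimecast specifics):

```shell
#!/bin/sh
# Hypothetical runbook helper: surface the most recent error lines for a
# service that won't start, so on-call doesn't page through the whole log.
last_errors() {
    service="$1"
    logfile="${2:-/var/log/$service.log}"
    if [ ! -r "$logfile" ]; then
        echo "log $logfile not readable; try: journalctl -u $service" >&2
        return 1
    fi
    # Show the last five error lines: usually enough to see the failure reason.
    grep -i 'error' "$logfile" | tail -n 5
}
```

A runbook would then say something like: run `last_errors <service>`, and if the output mentions a port conflict, follow section X; if it mentions a missing config, follow section Y.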

When should you decide to put something into a runbook, and when is it unnecessary?

If an issue happens frequently, it should be documented in a runbook so that anyone, even someone new, can follow the steps to fix it. The idea is that 90% of the common incidents should be covered. For example, if a service is down, the runbook would specify where to find logs, which commands to check, and what actions to take. On the other hand, rare or complex issues, where the resolution depends heavily on context or varies each time, don’t make sense to include in detail. For those, it’s better to focus on general troubleshooting steps.

How do you search for and find the correct runbooks?

Runbooks should be linked directly in the alert you receive. For example, if you get an alert about a service not running, the alert will have a link to the runbook that tells you what to check, like logs or commands to run. Runbooks are best stored in an internal wiki, so if you don’t find the link in the alert, you know where to search. The important thing is that runbooks are easy to find and up to date because that’s what makes them useful during incidents.

Do you have an interesting war story you can share with us?

Sure. At 1&1, we had a proprietary ad server software that ran a SQL query during startup. The query got slower over time, eventually timing out and preventing the server from starting. Since we couldn’t access the source code, we searched the binary for the SQL and patched it. By pinpointing the issue, a developer was able to adjust the SQL. This collaboration between sysadmin and developer perspectives highlights the value of SRE work.
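
The general technique (not the actual 1&1 tooling, which isn't described in detail) can be reproduced with standard tools: strip the non-printable bytes from the binary, then grep for the query. The fabricated "binary" below stands in for the real ad server:

```shell
#!/bin/sh
# Sketch: locating a SQL query embedded in a binary you have no source for.
# Build a stand-in "binary" with a query between NUL bytes for demonstration:
printf 'junk\0\0SELECT id FROM ads WHERE active=1\0\0more' > /tmp/fake.bin

# Replace non-printable bytes with newlines (a poor man's strings(1)),
# then grep for SQL keywords to locate the embedded query:
tr -c '[:print:]' '\n' < /tmp/fake.bin | grep 'SELECT'
```

On a real binary, strings(1) plus grep does the same job; patching the string in place additionally requires keeping the byte length identical so no offsets shift.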

Working with Different Teams



You’re embedded in a team-how does collaboration with developers work practically?

We plan everything together from the start. If there’s a new feature, we discuss infrastructure, automated deployments, and monitoring right away. Developers are experts in the code, and I bring the infrastructure expertise. This avoids unpleasant surprises before going live.

How about working with data scientists or ML engineers? Are there differences?

The principles are the same. ML models also need to be deployed and monitored. You deal with monitoring, resource allocation, and identifying performance drops. Whether it’s a microservice or an ML job, at the end of the day, it’s all running on servers or clusters that must remain stable.

What about working with managers or the FinOps team?

We often discuss costs, especially in the cloud, where scaling up resources is easy. It’s crucial to know our metrics: do we have enough capacity? Do we need all instances? Or is the CPU only at 5% utilization? This data helps managers decide whether the budget is sufficient or if optimizations are needed.

Do you have practical tips for working with SREs?

Yes, I have a few:

  • Early involvement: Include SREs from the beginning in your project.
  • Runbooks & documentation: Document recurring errors.
  • Try first: Try to understand the issue yourself before immediately asking the SRE.
  • Basic infra knowledge: Kubernetes and Terraform aren’t magic. Some basic understanding helps every developer.

Using AI Tools



Let’s talk about AI. How do you use it in your daily work?

For boilerplate code, like Terraform snippets, I often use ChatGPT. It saves time, although I always review and adjust the output. Log analysis is another exciting application. Instead of manually going through millions of lines, AI can summarize key outliers or errors.

Do you think AI could largely replace SREs or significantly change the role?

I see AI as an additional tool. SRE requires a deep understanding of how distributed systems work internally. While AI can assist with routine tasks or quickly detect anomalies, human expertise is indispensable for complex issues.

SRE Learning Resources



What resources would you recommend for learning about SRE?

The Google SRE book is a classic, though a bit dry. I really like 'Seeking SRE,' as it offers various perspectives on SRE, with many practical stories from different companies.

https://sre.google/books/
Seeking SRE

Do you have a podcast recommendation?

The Google SRE Prodcast is quite interesting. It offers insights into how Google approaches SRE, along with perspectives from external guests.

https://sre.google/prodcast/

Blogging



You also have a blog. What motivates you to write regularly?

Writing helps me learn the most. It also serves as a personal reference. Sometimes I look up how I solved a problem a year ago. And of course, others tackling similar projects might find inspiration in my posts.

What do you blog about?

Mostly technical topics I find exciting, like homelab projects, Kubernetes, or book summaries on IT and productivity. It’s a personal blog, so I write about what I enjoy.

Wrap-up



To wrap up, what are three things every team should keep in mind for stability?

First, maintain runbooks and documentation to avoid chaos at night. Second, automate everything; manual installs in production are risky. Third, define SLIs, SLOs, and SLAs early so everyone knows what we’re monitoring and guaranteeing.

Is there a motto or mindset that particularly inspires you as an SRE?

"Keep it simple and stupid" (KISS). Not everything has to be overly complex. And always stay curious. I’m still fascinated by how systems work under the hood.

Where can people find you online?

You can find links to my socials on my website paul.buetow.org.
I regularly post articles and link to everything else I’m working on outside of work.

https://paul.buetow.org

Thank you very much for your time and this insightful interview about the world of site reliability engineering!

My pleasure, this was fun.

Closing comments



Dear reader, I hope this conversation with Paul Bütow provided an exciting peek into the world of Site Reliability Engineering. Whether you’re a software developer, data scientist, ML engineer, or manager, reliable systems are always a team effort. Hopefully, you’ve taken some insights or tips from Paul’s experiences for your own team or next project. Thanks for joining us, and best of luck refining your own SRE practices!

E-Mail your comments to paul@nospam.buetow.org or contact Florian via the Cracking AI Engineering website :-)

Back to the main site
Posts from October to December 2024 gemini://foo.zone/gemfeed/2025-01-01-posts-from-october-to-december-2024.gmi 2024-12-31T18:09:58+02:00 Paul Buetow aka snonux paul@dev.buetow.org Happy new year!

Posts from October to December 2024



Published at 2024-12-31T18:09:58+02:00

Happy new year!

These are my social media posts from the last three months. I keep them here to reflect on them and also to not lose them. Social media networks come and go and are not under my control, but my domain is here to stay.

These are from Mastodon and LinkedIn. Have a look at my about page for my social media profiles. This list is generated with Gos, my social media platform sharing tool.

My about page
https://codeberg.org/snonux/gos

Table of Contents




October 2024



First on-call experience in a startup. Doesn't ...



First on-call experience in a startup. Doesn't sound like a lot of fun! But the lessons were learned! #sre

ntietz.com/blog/lessons-from-my-first-on-call/

Reviewing your own PR or MR before asking ...



Reviewing your own PR or MR before asking others to review it makes a lot of sense. I have seen so many silly mistakes that could have been avoided this way. It saves time for the real reviewer.

www.jvt.me/posts/2019/01/12/self-code-review/

Fun with defer in #golang, I didn't know that ...



Fun with defer in #golang. I didn't know that a defer object can be either heap or stack allocated. And there are some rules for inlining, too.

victoriametrics.com/blog/defer-in-go/

I have been in incidents. Understandably, ...



I have been in incidents. Understandably, everyone wants the issue to be resolved as quickly as possible, and others want to know how long the TTR will be. IMHO, providing no estimates at all is no solution either. So maybe give a rough estimate, but clearly communicate that the estimate is rough and that X, Y, and Z can interfere, meaning there is a chance it will take longer to resolve the incident. Just my thought. What's yours?

firehydrant.com/blog/hot-take-dont-provide-incident-resolution-estimates/

Little tips using strings in #golang and I ...



Little tips using strings in #golang and I personally think one must look more into the std lib (not just for strings, also for slices, maps,...), there are tons of useful helper functions.

www.calhoun.io/6-tips-for-using-strings-in-go/

Reading this post about #rust (especially the ...



Reading this post about #rust (especially the first part), I think I made a good choice in deciding to dive into #golang instead. There was a point where I wanted to learn a new programming language, and Rust was on my list of choices. I think the Go project does a much better job of deciding what goes into the language and how. What are your thoughts?

josephg.com/blog/rewriting-rust/

The opposite of #ChaosMonkey ... ...



The opposite of #ChaosMonkey ... automatically repairing and healing services, helping to reduce manual toil. Runbooks and scripts are only the first step, followed by a full-blown service written in Go. Could be useful, but IMHO why not rather address the root causes of the toil? #sre

blog.cloudflare.com/nl-nl/improving-platform-resilience-at-cloudflare/

November 2024



I just became a Silver Patreon for OSnews. What ...



I just became a Silver Patreon for OSnews. What is OSnews? It is an independent news site about IT, at times with an alternative take. I have enjoyed it since my early student days. This one and other projects I financially support are listed here:

foo.zone/gemfeed/2024-09-07-projects-i-support.gmi (Gemini)
foo.zone/gemfeed/2024-09-07-projects-i-support.html

Until now, I wasn't aware that Go is under a ...



Until now, I wasn't aware that Go is under a BSD-style license (3-clause, as it seems). Neat. I don't know why, but I was always under the impression it would be MIT. #bsd #golang

go.dev/LICENSE

These are some book notes from "Staff Engineer" ...



These are some book notes from "Staff Engineer" – there is some really good insight into what is expected from a Staff Engineer and beyond in the industry. I wish I had read the book earlier.

foo.zone/gemfeed/2024-10-24-staff-engineer-book-notes.gmi (Gemini)
foo.zone/gemfeed/2024-10-24-staff-engineer-book-notes.html

Looking at #Kubernetes, it's pretty much ...



Looking at #Kubernetes, it's pretty much following the Unix way of doing things. It has many tools, but each tool has its own single purpose: DNS, scheduling, container runtime, various controllers, networking, observability, alerting, and more services in the control plane. Everything is managed by different services or plugins, mostly running in their dedicated pods. They don't communicate through pipes but through network sockets, though. #k8s

There has been an outage at the upstream ...



There has been an outage at the upstream network provider for OpenBSD.Amsterdam (the hoster I am using). This was the first real-world test for my KISS HA setup, and it worked flawlessly! All my sites and services failed over automatically to my other #OpenBSD VM!

foo.zone/gemfeed/2024-04-01-KISS-high-availability-with-OpenBSD.gmi (Gemini)
foo.zone/gemfeed/2024-04-01-KISS-high-availability-with-OpenBSD.html
openbsd.amsterdam/

One of the more confusing parts in Go, nil ...



One of the more confusing parts in Go, nil values vs nil errors: #golang

unexpected-go.com/nil-errors-that-are-non-nil-errors.html

Agreeably, writing things down with diagrams helps ...



Agreeably, writing things down with diagrams helps you think things through more thoroughly. And it keeps others on the same page. Only worthwhile for projects above a certain size, IMHO.

ntietz.com/blog/reasons-to-write-design-docs/

I like the idea of types in Ruby. Raku is ...



I like the idea of types in Ruby. Raku already supports that, but in Ruby you must specify the types in a separate .rbs file, which is, in my opinion, cumbersome and a reason not to use it extensively for now. I believe there are efforts to embed the type information in the standard .rb files, and that .rbs is just an experiment to see how types could work out without introducing changes into the core Ruby language itself right now. #Ruby #RakuLang

github.com/ruby/rbs

So, #Haskell is better suited for general ...



So, #Haskell is better suited for general purpose than #Rust? I thought deploying something in Haskell means publishing an academic paper :-) Interesting rant about Rust, though:

chrisdone.com/posts/rust/

At first, functional options add a bit of ...



At first, functional options add a bit of boilerplate, but they turn out to be quite neat, especially when you have very long parameter lists that need to be made neat and tidy. #golang

www.calhoun.io/using-functional-options-instead-of-method-chaining-in-go/

Revamping my home lab a little bit. #freebsd ...



Revamping my home lab a little bit. #freebsd #bhyve #rocky #linux #vm #k3s #kubernetes #wireguard #zfs #nfs #ha #relayd #k8s #selfhosting #homelab

foo.zone/gemfeed/2024-11-17-f3s-kubernetes-with-freebsd-part-1.gmi (Gemini)
foo.zone/gemfeed/2024-11-17-f3s-kubernetes-with-freebsd-part-1.html

Wondering to which #web #browser I should ...



Wondering to which #web #browser I should switch now personally ...

www.osnews.com/story/141100/mozilla-fo..-..dvocacy-for-open-web-privacy-and-more/

eks-node-viewer is a nifty tool, showing the ...



eks-node-viewer is a nifty tool, showing the compute nodes currently in use in the #EKS cluster. Especially useful when dynamically allocating nodes with #karpenter or auto-scaling groups.

github.com/awslabs/eks-node-viewer

Have put more Photos on - On my static photo ...



Have put more photos on my static photo sites, generated with a #bash script.

irregular.ninja

In Go, passing pointers is not automatically ...



In Go, passing pointers is not automatically faster than passing values. Pointers often force the memory to be allocated on the heap, adding GC overhead. With values, Go can decide to put the memory on the stack instead. But with large structs/objects (however you want to call them), or if you want to modify state, pointers are the right semantics to use. #golang

blog.boot.dev/golang/pointers-faster-than-values/

Having been part of on-call rotations over ...



Having been part of on-call rotations over my whole professional life, I just learned this lesson: "Tell people who are new to on-call: Just have fun" :-) This is a neat blog post to read:

ntietz.com/blog/what-i-tell-people-new-to-oncall/

Feels good to code in my old love #Perl again ...



Feels good to code in my old love #Perl again after a while. I am implementing a log parser for generating site stats of my personal homepage! :-) @Perl

This is an interactive summary of the Go ...



This is an interactive summary of the Go release, with a lot of examples utilising iterators in the slices and map packages. Love it! #golang

antonz.org/go-1-23/

December 2024



That's unexpected, you can't remove a NaN key ...



That's unexpected, you can't remove a NaN key from a map without clearing it! #golang

unexpected-go.com/you-cant-remove-a-nan-key-from-a-map-without-clearing-it.html

My second blog post about revamping my home lab ...



My second blog post about revamping my home lab a little bit just hit the net. #FreeBSD #ZFS #n100 #k8s #k3s #kubernetes

foo.zone/gemfeed/2024-12-03-f3s-kubernetes-with-freebsd-part-2.gmi (Gemini)
foo.zone/gemfeed/2024-12-03-f3s-kubernetes-with-freebsd-part-2.html

Very insightful article about tech hiring in ...



Very insightful article about tech hiring in the age of LLMs. As an interviewer, I have experienced some of the scenarios already first-hand...

newsletter.pragmaticengineer.com/p/how-genai-changes-tech-hiring

for #bpf #ebpf performance debugging, have ...



for #bpf #ebpf performance debugging, have a look at bpftop from Netflix. A neat tool showing you the estimated CPU time and other performance statistics for all the BPF programs currently loaded into the #linux kernel. Highly recommend!

github.com/Netflix/bpftop

89 things he/she knows about Git commits is a ...



89 things he/she knows about Git commits is a neat list of #Git wisdoms

www.jvt.me/posts/2024/07/12/things-know-commits/

I found that working on multiple side projects ...



I found that working on multiple side projects concurrently is better than concentrating on just one. This seems inefficient at first, but whenever you tend to lose motivation, you can temporarily switch to another one with full élan. However, remember to stop starting and start finishing. This doesn't mean you should be working on 10+ (and a growing list of) side projects concurrently! Select your projects and commit to finishing them before starting the next thing. For example, my current limit of concurrent side projects is around five.

Agreed? Agreed. Besides #Ruby, I would also ...



Agreed? Agreed. Besides #Ruby, I would also add #RakuLang and #Perl @Perl to the list of languages that are great for shell scripts - "Making Easy Things Easy and Hard Things Possible"

lucasoshiro.github.io/posts-en/2024-06-17-ruby-shellscript/

Plan9 assembly format in Go, but wait, it's not ...



Plan9 assembly format in Go, but wait, it's not the Operating System Plan9! #golang #rabbithole

www.osnews.com/story/140941/go-plan9-memo-speeding-up-calculations-450/

This is a neat blog post about the Helix text ...



This is a neat blog post about the Helix text editor, to which I personally switched around a year ago (from NeoVim). I should blog about my experience as well. To summarize: I am using it together with the terminal multiplexer #tmux. It doesn't bother me that Helix is purely terminal-based and therefore everything has to be in the same font. #HelixEditor

jonathan-frere.com/posts/helix/

This blog post is basically a rant against ...



This blog post is basically a rant against DataDog... Personally, I don't have much experience with DataDog (actually, I have never used it), but one way to work with logs cost-effectively at my day job (with over 2,000 physical server machines) is by using dtail! #dtail #logs #logmanagement

crys.site/blog/2024/reinventint-the-weel/
dtail.dev

Quick trick to get Helix themes selected ...



Quick trick to get Helix themes selected randomly #HelixEditor

foo.zone/gemfeed/2024-12-15-random-helix-themes.gmi (Gemini)
foo.zone/gemfeed/2024-12-15-random-helix-themes.html

Example where complexity attacks you from ...



Example where complexity attacks you from behind #k8s #kubernetes #OpenAI

surfingcomplexity.blog/2024/12/14/quic..-..ecent-openai-public-incident-write-up/

LLMs for Ops? Summaries of logs, probabilities ...



LLMs for Ops? Summaries of logs, probabilities about correctness, auto-generating Ansible, some use cases are there. Wouldn't trust it fully, though.

youtu.be/WodaffxVq-E?si=noY0egrfl5izCSQI

Excellent article about your dream Product ...



Excellent article about your dream Product Manager: Why every software team needs a product manager to thrive via @wallabagapp

testdouble.com/insights/why-product-ma..-..s-accelerate-improve-software-delivery

I just finished reading all chapters of CPU ...



I just finished reading all chapters of CPU land: ... not claiming to remember every detail, but it is a great refresher on how CPUs and operating systems actually work under the hood when you execute a program, which we tend to forget in our higher-abstraction world. I liked the "story" and some of the jokes along the way! Size-wise, it is pretty digestible (we are not talking books here, only 7 web articles/chapters)! #cpu #linux #unix #kernel #macOS

cpu.land/

Indeed, useful to know this stuff! #sre ...



Indeed, useful to know this stuff! #sre

biriukov.dev/docs/resolver-dual-stack-..-..resolvers-and-dual-stack-applications/

It's the small things that make Unix-like ...



It's the small things that make Unix-like systems, like GNU/Linux, interesting. I didn't know about this #GNU #Tar behaviour yet:

xeiaso.net/notes/2024/pop-quiz-tar/

My New Year's resolution is not to start any ...



My New Year's resolution is not to start any new non-fiction books (or only very few) but to re-read and listen to my favorites, which I read to reflect on and see things from different perspectives. Every time you re-read a book, you gain new insights.

Other related posts:

2025-01-01 Posts from October to December 2024 (You are currently reading this)

E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site
Random Helix Themes gemini://foo.zone/gemfeed/2024-12-15-random-helix-themes.gmi 2024-12-15T13:55:05+02:00 Paul Buetow aka snonux paul@dev.buetow.org I thought it would be fun to have a random Helix theme every time I open a new shell. Helix is the text editor I use.

Random Helix Themes



Published at 2024-12-15T13:55:05+02:00; Last updated 2024-12-18

I thought it would be fun to have a random Helix theme every time I open a new shell. Helix is the text editor I use.

https://helix-editor.com/

So I put this into my zsh dotfiles (in some editor.zsh.source in my ~ directory):

export EDITOR=hx
export VISUAL=$EDITOR
export GIT_EDITOR=$EDITOR
export HELIX_CONFIG_DIR=$HOME/.config/helix

editor::helix::random_theme () {
    # May add more theme search paths based on OS. This one is
    # for Fedora Linux, but there is also MacOS, etc.
    local -r theme_dir=/usr/share/helix/runtime/themes
    if [ ! -d "$theme_dir" ]; then
        echo "Helix theme dir $theme_dir doesn't exist"
        return 1
    fi

    local -r config_file=$HELIX_CONFIG_DIR/config.toml
    # Pick one random theme file name, stripped of its .toml extension.
    local -r random_theme="$(basename "$(ls "$theme_dir" \
        | grep -v random.toml | grep -F .toml | sort -R \
        | head -n 1)" | cut -d. -f1)"

    sed "/^theme =/ { s/.*/theme = \"$random_theme\"/; }" \
        "$config_file" > "$config_file.tmp" &&
        mv "$config_file.tmp" "$config_file"
}

if [ -f $HELIX_CONFIG_DIR/config.toml ]; then
    editor::helix::random_theme
fi

So every time I open a new terminal or shell, editor::helix::random_theme gets called, which randomly selects a theme from all installed ones and updates the helix config accordingly.

[paul@earth] ~ % editor::helix::random_theme
[paul@earth] ~ % head -n 1 ~/.config/helix/config.toml
theme = "jellybeans"
[paul@earth] ~ % editor::helix::random_theme
[paul@earth] ~ % head -n 1 ~/.config/helix/config.toml
theme = "rose_pine"
[paul@earth] ~ % editor::helix::random_theme
[paul@earth] ~ % head -n 1 ~/.config/helix/config.toml
theme = "noctis"
[paul@earth] ~ %
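
One tweak I can imagine (my own sketch, not part of the original dotfiles): exclude the currently active theme from the pool, so a new shell always gets a visibly different one. The helper below just picks a random item from a list while skipping the current value:

```shell
#!/bin/sh
# Hypothetical helper: pick a random theme name from the arguments,
# skipping the currently active one (passed as the first argument).
pick_different_theme() {
    current="$1"; shift
    # Remaining arguments are the candidate theme names.
    for t in "$@"; do
        [ "$t" = "$current" ] || echo "$t"
    done | sort -R | head -n 1
}
```

It could be wired into the theme-setting function by passing the theme name currently in config.toml plus the candidate list from the theme directory.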

A better version



Update 2024-12-18: This is an improved version, which works cross-platform (e.g., also on macOS) and supports multiple theme directories:

export EDITOR=hx
export VISUAL=$EDITOR
export GIT_EDITOR=$EDITOR
export HELIX_CONFIG_DIR=$HOME/.config/helix

editor::helix::theme::get_random () {
    for dir in $(hx --health \
        | awk '/^Runtime directories/ { print $3 }' | tr ';' ' '); do
        if [ -d "$dir/themes" ]; then
            ls "$dir/themes"
        fi
    done | grep -F .toml | sort -R | head -n 1 | cut -d. -f1
}

editor::helix::theme::set () {
    local -r theme="$1"; shift

    local -r config_file=$HELIX_CONFIG_DIR/config.toml

    sed "/^theme =/ { s/.*/theme = \"$theme\"/; }" \
        "$config_file" > "$config_file.tmp" &&
        mv "$config_file.tmp" "$config_file"
}

if [ -f $HELIX_CONFIG_DIR/config.toml ]; then
    editor::helix::theme::set $(editor::helix::theme::get_random)
fi

I hope you had some fun. E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site
f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation gemini://foo.zone/gemfeed/2024-12-03-f3s-kubernetes-with-freebsd-part-2.gmi 2024-12-02T23:48:21+02:00 Paul Buetow aka snonux paul@dev.buetow.org This is the second blog post about my f3s series for my self-hosting demands in my home lab. f3s? The 'f' stands for FreeBSD, and the '3s' stands for k3s, the Kubernetes distribution I will use on FreeBSD-based physical machines.

f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation



Published at 2024-12-02T23:48:21+02:00

This is the second blog post about my f3s series for my self-hosting demands in my home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution I will use on FreeBSD-based physical machines.

We set the stage last time; this time, we will set up the hardware for this project.

These are all the posts so far:

2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage
2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation (You are currently reading this)
2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts
2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs
2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network

f3s logo

ChatGPT-generated logo.

Let's continue...

Table of Contents




Deciding on the hardware



Note that the OpenBSD VMs included in the f3s setup (which, as you know from the first part of this series, will later be used for internet ingress) are already in place. These are virtual machines that I rent at OpenBSD Amsterdam and Hetzner.

https://openbsd.amsterdam
https://hetzner.cloud

This means that only the FreeBSD boxes still need to be covered; they will later run k3s in Linux VMs via the bhyve hypervisor.

I've been considering whether to use Raspberry Pis or look for alternatives. It turns out that complete N100-based mini-computers aren't much more expensive than Raspberry Pi 5s, and they don't require assembly. Furthermore, I like that they are AMD64 and not ARM-based, which increases compatibility with some applications (e.g., I might want to virtualize Windows (via bhyve) on one of those, though that's out of scope for this blog series).

Not ARM but Intel N100



I needed something compact, efficient, and capable enough to handle the demands of a small-scale Kubernetes cluster and preferably something I don't have to assemble a lot. After researching, I decided on the Beelink S12 Pro with Intel N100 CPUs.

Beelink Mini S12 Pro N100 official page

The Intel N100 CPUs are built on the "Alder Lake-N" architecture. These chips are designed to balance performance and energy efficiency well. With four cores, they're more than capable of running multiple containers, even with moderate workloads. Plus, they consume only around 8W of power (ok, that's more than the Pis...), keeping the electricity bill low enough and the setup quiet - perfect for 24/7 operation.

Beelink preparation

The Beelink comes with the following specs:

  • 12th Gen Intel N100 processor with four cores, four threads, and a maximum frequency of up to 3.4 GHz.
  • 16 GB of DDR4 RAM, with an official maximum of 16 GB (though people have installed 32 GB).
  • 500 GB M.2 SSD, with the option to install a second 2.5" SSD drive (which I want to make use of later in this blog series).
  • Gigabit Ethernet.
  • Four USB 3.2 Gen2 ports (maybe I want to mount something externally at some point).
  • Dimensions and weight: 115 x 102 x 39 mm, 280 g.
  • Silent cooling system.
  • HDMI output (needed only for the initial installation and maybe for troubleshooting later).
  • Auto power-on via WoL (I may make use of it).
  • Wi-Fi (not going to use it).

I bought three (3) of them for the cluster I intend to build.



Unboxing was uneventful. Every Beelink PC came with:

  • An AC power adapter
  • An HDMI cable
  • A VESA mount with screws (not using it as of now)
  • Some manuals
  • The pre-assembled Beelink PC itself.
  • A "Hello" post card (??)

Overall, I love the small form factor.

Network switch



I went with the TP-Link mini 5-port switch, as I had a spare one available. That switch will be plugged into my wall Ethernet port, which connects directly to my fiber internet router with 100 Mbit/s download and 50 Mbit/s upload speed.

Switch

Installing FreeBSD



Base install



First, I downloaded the boot-only ISO of the latest FreeBSD release and dumped it on a USB stick via my Fedora laptop:

# Double-check (e.g. with lsblk) that /dev/sda really is the USB stick first!
[paul@earth]~/Downloads% sudo dd \
  if=FreeBSD-14.1-RELEASE-amd64-bootonly.iso \
  of=/dev/sda conv=sync status=progress

Next, I plugged the Beelinks (one after another) into my monitor via HDMI (the resolution of the FreeBSD text console seems strangely stretched, as I am using the LG Dual Up monitor), connected Ethernet, an external USB keyboard, and the FreeBSD USB stick, and booted the devices up. With F7, I entered the boot menu and selected the USB stick for the FreeBSD installation.

The installation was uneventful. I selected:

  • Guided ZFS on root (pool zroot)
  • Unencrypted ZFS (I will encrypt separate datasets later; I want it to be able to boot without manual interaction)
  • Static IP configuration (to ensure that the boxes always have the same IPs, even after switching the router/DHCP server)
  • I decided to enable the SSH daemon, NTP server, and NTP time synchronization at boot, and I also enabled powerd for automatic CPU frequency scaling.
  • In addition to root, I added a personal user, paul, whom I placed in the wheel group.
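
The per-dataset encryption mentioned above could later look like the following sketch (the dataset name zroot/secure is a placeholder of mine, not from this series):

```
root@f0:~ # zfs create -o encryption=on -o keyformat=passphrase \
    -o keylocation=prompt zroot/secure
root@f0:~ # # After a reboot, load the key manually before mounting:
root@f0:~ # zfs load-key zroot/secure
root@f0:~ # zfs mount zroot/secure
```

This keeps the pool itself bootable without manual interaction while still protecting selected datasets.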

After doing all that three times (once for each Beelink PC), I had three ready-to-use FreeBSD boxes! Their hostnames are f0, f1 and f2!

Beelink installation

Latest patch level and customizing /etc/hosts



After the first boot, I upgraded to the latest FreeBSD patch level as follows:

root@f0:~ # freebsd-update fetch
root@f0:~ # freebsd-update install
root@f0:~ # shutdown -r now

I also added the following entries for the three FreeBSD boxes to the /etc/hosts file:

root@f0:~ # cat <<END >>/etc/hosts
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
END

You might wonder: why bother with the hosts file at all? Why not use DNS properly? The reason is simplicity. I don't manage 100 hosts, only a few here and there. Having an OpenWRT router in my home, I could also configure everything there, but maybe I'll do that later. For now, keep it simple and straightforward.

After install



After that, I installed the following additional packages:

root@f0:~ # pkg install helix doas zfs-periodic uptimed

Helix editor



Helix? It's my favourite text editor. I have nothing against vi but like hx (Helix) more!

https://helix-editor.com/

doas



doas? It's a pretty neat (and KISS) replacement for sudo. It has far fewer features than sudo, which is supposed to make it more secure. Its origin is the OpenBSD project. For doas, I accepted the default configuration (where users in the wheel group are allowed to run commands as root):

root@f0:~ # cp /usr/local/etc/doas.conf.sample /usr/local/etc/doas.conf

https://man.openbsd.org/doas
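
The effect of the accepted default boils down to a single rule; a minimal doas.conf granting what is described above would be (a sketch; the actual sample file also contains further commented-out examples):

```
# /usr/local/etc/doas.conf
# Allow members of the wheel group to run commands as root
permit :wheel
```

A quick test: running doas whoami as the user paul should print root.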

Periodic ZFS snapshotting



zfs-periodic is a nifty tool for automatically creating ZFS snapshots. I decided to go with the following configuration here:

root@f0:~ # cat <<END >>/etc/periodic.conf
daily_zfs_snapshot_enable="YES"
daily_zfs_snapshot_pools="zroot"
daily_zfs_snapshot_keep="7"
weekly_zfs_snapshot_enable="YES"
weekly_zfs_snapshot_pools="zroot"
weekly_zfs_snapshot_keep="5"
monthly_zfs_snapshot_enable="YES"
monthly_zfs_snapshot_pools="zroot"
monthly_zfs_snapshot_keep="6"
END

https://github.com/ross/zfs-periodic
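
Once periodic(8) has run a few times, the resulting snapshots can be inspected like this (the exact snapshot names depend on zfs-periodic's naming scheme):

```
root@f0:~ # zfs list -t snapshot -o name,used,creation | grep zroot
```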

Uptime tracking



uptimed? I like to track my uptimes. This is how I configured the daemon:

root@f0:~ # cp /usr/local/etc/uptimed.conf-dist \
  /usr/local/etc/uptimed.conf
root@f0:~ # hx /usr/local/etc/uptimed.conf

In the Helix editor session, I changed LOG_MAXIMUM_ENTRIES to 0 to keep all uptime entries forever and not cut off at 50 (the default config). After that, I enabled and started uptimed:

root@f0:~ # service uptimed enable
root@f0:~ # service uptimed start

To check the current uptime stats, I can now run uprecords:

 root@f0:~ # uprecords
     #               Uptime | System                                     Boot up
----------------------------+---------------------------------------------------
->   1     0 days, 00:07:34 | FreeBSD 14.1-RELEASE      Mon Dec  2 12:21:44 2024
----------------------------+---------------------------------------------------
NewRec     0 days, 00:07:33 | since                     Mon Dec  2 12:21:44 2024
    up     0 days, 00:07:34 | since                     Mon Dec  2 12:21:44 2024
  down     0 days, 00:00:00 | since                     Mon Dec  2 12:21:44 2024
   %up              100.000 | since                     Mon Dec  2 12:21:44 2024

This is how I track the uptimes for all of my hosts:

Unveiling guprecords.raku: Global Uptime Records with Raku
https://github.com/rpodgorny/uptimed

Hardware check



Ethernet



Works. Nothing eventful, really. It's a cheap Realtek chip, but it will do what it is supposed to do.

paul@f0:~ % ifconfig re0
re0: flags=1008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1500
        options=8209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC,LINKSTATE>
        ether e8:ff:1e:d7:1c:ac
        inet 192.168.1.130 netmask 0xffffff00 broadcast 192.168.1.255
        inet6 fe80::eaff:1eff:fed7:1cac%re0 prefixlen 64 scopeid 0x1
        inet6 fd22:c702:acb7:0:eaff:1eff:fed7:1cac prefixlen 64 detached autoconf
        inet6 2a01:5a8:304:1d5c:eaff:1eff:fed7:1cac prefixlen 64 autoconf pltime 10800 vltime 14400
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active
        nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>

RAM



All there:

paul@f0:~ % sysctl hw.physmem
hw.physmem: 16902905856


CPUs



They work:

paul@f0:~ % sysctl dev.cpu | grep freq:
dev.cpu.3.freq: 705
dev.cpu.2.freq: 705
dev.cpu.1.freq: 604
dev.cpu.0.freq: 604

CPU throttling



With powerd running, the CPU frequency is throttled down when the box isn't busy. To stress it a bit, I ran ubench and watched the frequencies scale up again:

paul@f0:~ % doas pkg install ubench
paul@f0:~ % rehash # For tcsh to find the newly installed command
paul@f0:~ % ubench &
paul@f0:~ % sysctl dev.cpu | grep freq:
dev.cpu.3.freq: 2922
dev.cpu.2.freq: 2922
dev.cpu.1.freq: 2923
dev.cpu.0.freq: 2922
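
powerd's behaviour can be tuned further in /etc/rc.conf. The flags below are standard powerd options, but these particular values are illustrative assumptions, not taken from this setup:

```
# /etc/rc.conf — powerd tuning (illustrative values)
powerd_enable="YES"
# -a: mode on AC power, -b: mode on battery (irrelevant for the Beelinks)
powerd_flags="-a hiadaptive -b adaptive"
```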

Idle, all three Beelinks plus the switch consumed 26.2W. But with ubench stressing all the CPUs, it went up to 38.8W.

Idle consumption.

Conclusion



The Beelink S12 Pro with Intel N100 CPUs checks all the boxes for a k3s project: Compact, efficient, expandable, and affordable. Its compatibility with both Linux and FreeBSD makes it versatile for other use cases, whether as part of your cluster or as a standalone system. If you’re looking for hardware that punches above its weight for Kubernetes, this little device deserves a spot on your shortlist.

Beelinks stacked

To ease cable management, I need to get shorter ethernet cables. I will place the tower on my shelf, where most of the cables will be hidden (together with a UPS, which will also be added to the setup).

Read the next post of this series:

f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts

Other *BSD-related posts:

2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network
2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs
2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts
2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation (You are currently reading this)
2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage
2024-04-01 KISS high-availability with OpenBSD
2024-01-13 One reason why I love OpenBSD
2022-10-30 Installing DTail on OpenBSD
2022-07-30 Let's Encrypt with OpenBSD and Rex
2016-04-09 Jails and ZFS with Puppet on FreeBSD

E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site
f3s: Kubernetes with FreeBSD - Part 1: Setting the stage gemini://foo.zone/gemfeed/2024-11-17-f3s-kubernetes-with-freebsd-part-1.gmi 2024-11-16T23:20:14+02:00 Paul Buetow aka snonux paul@dev.buetow.org This is the first blog post about my f3s series for my self-hosting demands in my home lab. f3s? The 'f' stands for FreeBSD, and the '3s' stands for k3s, the Kubernetes distribution I will use on FreeBSD-based physical machines.

f3s: Kubernetes with FreeBSD - Part 1: Setting the stage



Published at 2024-11-16T23:20:14+02:00

This is the first blog post about my f3s series for my self-hosting demands in my home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution I will use on FreeBSD-based physical machines.

I will post a new entry every month or so (there are too many other side projects for more frequent updates—I bet you can understand).

These are all the posts so far:

2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage (You are currently reading this)
2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation
2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts
2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs
2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network

f3s logo

ChatGPT-generated logo.

Let's begin...

Table of Contents




Why this setup?



My previous setup was great for learning Terraform and AWS, but it is too expensive. Costs are under control there, but only because I am shutting down all containers after use (so they are offline ninety percent of the time and still cost around $20 monthly). With the new setup, I could run all containers 24/7 at home, which would still be cheaper in terms of electricity consumption. I have a 400 MBit/s uplink (I could have more if I wanted, but it is more than plenty for my use case already).

From babylon5.buetow.org to .cloud

Migrating off all my containers from AWS ECS means I need a reliable and scalable environment to host my workloads. I wanted something:

  • To self-host all my open-source apps (Docker containers).
  • Fully under my control (goodbye cloud vendor lock-in).
  • Secure and redundant.
  • Cost-efficient (after the initial hardware investment).
  • Something I can poke around with and also pick up new skills.

The infrastructure



This is still in progress, and I don't yet own the hardware. But in this first part of the blog series, I will outline what I intend to do.

Diagram

Physical FreeBSD nodes and Linux VMs



The setup starts with three physical FreeBSD nodes deployed into my home LAN. On these, I'm going to run Rocky Linux virtual machines with bhyve. Why Linux VMs on FreeBSD and not Linux directly? I want to leverage the great ZFS integration in FreeBSD (among other features), and I have been using FreeBSD for a while in my home lab. And with bhyve, there is a very performant hypervisor available which lets the Linux VMs run at near-native speed (another possible use case of mine would be running a Windows bhyve VM on one of the nodes - but that's out of scope for this blog series).

https://www.freebsd.org/
https://wiki.freebsd.org/bhyve

I selected Rocky Linux because it comes with long-term support (I don't want to upgrade the VMs every 6 months). Rocky Linux 9 will reach its end of life in 2032, which is plenty of time! Of course, there will be minor upgrades, but nothing will significantly break my setup.

https://rockylinux.org/
https://wiki.rockylinux.org/rocky/version/

Furthermore, I am already using "RHEL-family" related distros at work and Fedora on my main personal laptop. Rocky Linux belongs to the same type of Linux distribution family, so I already feel at home here. I also used Rocky 9 before I switched to AWS ECS. Now, I am switching back in one sense or another ;-)

Kubernetes with k3s



These Linux VMs form a three-node k3s Kubernetes cluster, where my containers will reside moving forward. The 3-node k3s cluster will be highly available (in etcd mode), and all apps will probably be deployed with Helm. Prometheus will also be running in k3s, collecting time-series metrics and handling monitoring. Additionally, a private Docker registry will be deployed into the k3s cluster, where I will store some of my self-created Docker images. k3s is the perfect distribution of Kubernetes for homelabbers due to its simplicity and the inclusion of the most useful features out of the box!

https://k3s.io/

HA volumes for k3s with HAST/ZFS and NFS



Persistent storage for the k3s cluster will be handled by highly available (HA) NFS shares backed by ZFS on the FreeBSD hosts.

On two of the three physical FreeBSD nodes, I will add a second SSD drive to each and dedicate it to a zhast ZFS pool. With HAST (FreeBSD's solution for highly available storage), this pool will be replicated at the byte level to a standby node.

A virtual IP (VIP) will point to the master node. When the master node goes down, the VIP will fail over to the standby node, where the ZFS pool will be mounted. An NFS server will run on both nodes. k3s will use the VIP to access the NFS shares.

FreeBSD Wiki: Highly Available Storage

You can think of DRBD as the Linux equivalent of FreeBSD's HAST.
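
A minimal hast.conf for the planned zhast pool might look like this sketch (the host names f0/f1 and the disk device ada1 are my assumptions; the actual layout will follow in a later part of the series):

```
# /etc/hast.conf (sketch; hosts and disk device are assumptions)
resource zhast {
        on f0 {
                local /dev/ada1
                remote 192.168.1.131
        }
        on f1 {
                local /dev/ada1
                remote 192.168.1.130
        }
}
```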

OpenBSD/relayd to the rescue for external connectivity



All apps should be reachable through the internet (e.g., from my phone or computer when travelling). For external connectivity and TLS management, I've got two OpenBSD VMs (one hosted by OpenBSD Amsterdam and another hosted by Hetzner) handling public-facing services like DNS, relaying traffic, and automating Let's Encrypt certificates.

All of this (every Linux VM to every OpenBSD box) will be connected via WireGuard tunnels, keeping everything private and secure. There will be 6 WireGuard tunnels (3 k3s nodes times two OpenBSD VMs).

https://en.wikipedia.org/wiki/WireGuard
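
One of the six tunnels could be sketched like this on a k3s node (all keys, addresses, and the port are placeholders, not values from this series):

```
# /etc/wireguard/wg0.conf on a k3s VM (sketch, placeholder values)
[Interface]
PrivateKey = <k3s-node-private-key>
Address = 10.88.0.2/24

[Peer]
# One of the two OpenBSD VMs
PublicKey = <openbsd-vm-public-key>
Endpoint = vm.example.org:51820
AllowedIPs = 10.88.0.1/32
PersistentKeepalive = 25
```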

So, when I want to access a service running in k3s, I will hit an external DNS endpoint (with the authoritative DNS servers being the OpenBSD boxes). The DNS will resolve to the master OpenBSD VM (see my KISS highly-available with OpenBSD blog post), and from there, the relayd process (with a Let's Encrypt certificate—see my Let's Encrypt with OpenBSD and Rex blog post) will accept the TCP connection and forward it through the WireGuard tunnel to a reachable node port of one of the k3s nodes, thus serving the traffic.

KISS high-availability with OpenBSD
Let's Encrypt with OpenBSD and Rex

The OpenBSD setup described here already exists and is ready to use. The only thing that does not yet exist is the configuration of relayd to forward requests to k3s through the WireGuard tunnel(s).
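
That not-yet-existing relayd configuration could eventually look roughly like this sketch (the public IP, the WireGuard-internal addresses, and the k3s node port are all placeholders):

```
# /etc/relayd.conf (sketch, placeholder addresses and ports)
ext_ip="192.0.2.1"
table <k3s> { 10.88.0.2 10.88.0.3 10.88.0.4 }

relay "k3s_https" {
        listen on $ext_ip port 443 tls
        forward to <k3s> port 30080 check tcp
}
```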

Data integrity



Periodic backups



Let's face it, backups are non-negotiable.

On the HAST master node, incremental and encrypted ZFS snapshots are created daily and automatically backed up to AWS S3 Glacier Deep Archive via CRON. I have a bunch of scripts already available, which I currently use for a similar purpose on my FreeBSD Home NAS server (an old ThinkPad T440 with an external USB drive enclosure, which I will eventually retire when the HAST setup is ready). I will copy them and slightly modify them to fit the purpose.
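
Such a daily incremental, encrypted backup pipeline could be sketched as follows (snapshot names, the bucket, and the key file are placeholders; the actual scripts are not shown here):

```
#!/bin/sh
# Sketch: send an incremental ZFS snapshot, compress and encrypt it,
# and ship it to S3 Glacier Deep Archive. All names are placeholders.
zfs send -i zroot@daily.2024-12-02 zroot@daily.2024-12-03 \
  | gzip \
  | openssl enc -aes-256-cbc -pbkdf2 -pass file:/root/backup.key \
  | aws s3 cp - s3://example-backup-bucket/zroot-daily.2024-12-03.zfs.gz.enc \
      --storage-class DEEP_ARCHIVE
```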

There's also zfstools in the ports, which helps set up an automatic snapshot regime:

https://www.freshports.org/sysutils/zfstools

The backup scripts also perform some zpool scrubbing now and then. A scrub once in a while keeps the trouble away.

Power protection



Power outages occur regularly in my area, so a UPS will keep the infrastructure running during short outages and protect the hardware. I'm still deciding which UPS to get; until now I haven't needed one, as my previous NAS is simply an older laptop that already has a battery to ride out power outages. However, there are plenty of options to choose from. My main criterion is that the UPS should be silent, as the whole setup will be installed in an upper shelf unit in my daughter's room. ;-)

Monitoring: Keeping an eye on everything



Robust monitoring is vital to any infrastructure, especially one as distributed as mine. I've thought about a setup that ensures I'll always be aware of what's happening in my environment.

Prometheus and Grafana



Inside the k3s cluster, Prometheus will be deployed to handle metrics collection. It will be configured to scrape data from my Kubernetes workloads, nodes, and any services I monitor. Prometheus also integrates with Alertmanager to generate alerts based on predefined thresholds or conditions.

https://prometheus.io

For visualization, Grafana will be deployed alongside Prometheus. Grafana lets me build dynamic, customizable dashboards that provide a real-time view of everything from resource utilization to application performance. Whether it's keeping track of CPU load, memory usage, or the health of Kubernetes pods, Grafana has it covered. This will also make troubleshooting easier, as I can quickly pinpoint where issues are arising.

https://grafana.com

Gogios: My custom alerting system



Alerts generated by Prometheus are forwarded to Alertmanager, which I will configure to work with Gogios, a lightweight monitoring and alerting system I wrote myself. Gogios runs on one of my OpenBSD VMs. At regular intervals, Gogios scrapes the alerts generated in the k3s cluster and notifies me via Email.

KISS server monitoring with Gogios

Ironically, I implemented Gogios to avoid using more complex alerting systems like Prometheus, but here we go—it integrates well now.

Conclusion



This setup may be just the beginning. Some ideas I'm thinking about for the future:

  • Adding more FreeBSD nodes (in different physical locations, maybe at my wider family's places? WireGuard would make it possible!) for better redundancy. (HA storage then might be trickier)
  • Deploying more Docker apps (data-intensive ones, like a picture gallery, my entire audiobook catalogue, or even a music server) to k3s.

For now, though, I'm focused on completing the migration from AWS ECS and getting all my Docker containers running smoothly in k3s.

What's your take on self-hosting? Are you planning to move away from managed cloud services? Stay tuned for the second part of this series, where I will likely write about the hardware and the OS setups.

Read the next post of this series:

f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation

Other *BSD-related posts:

2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network
2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs
2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts
2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation
2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage (You are currently reading this)
2024-04-01 KISS high-availability with OpenBSD
2024-01-13 One reason why I love OpenBSD
2022-10-30 Installing DTail on OpenBSD
2022-07-30 Let's Encrypt with OpenBSD and Rex
2016-04-09 Jails and ZFS with Puppet on FreeBSD

E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site
'Staff Engineer' book notes gemini://foo.zone/gemfeed/2024-10-24-staff-engineer-book-notes.gmi 2024-10-24T20:57:44+03:00 Paul Buetow aka snonux paul@dev.buetow.org These are my personal takeaways after reading 'Staff Engineer' by Will Larson. Note that the book contains much more knowledge wisdom and that these notes only contain points I personally found worth writing down. This is mainly for my own use, but you might find it helpful too.

"Staff Engineer" book notes



Published at 2024-10-24T20:57:44+03:00

These are my personal takeaways after reading "Staff Engineer" by Will Larson. Note that the book contains much more knowledge and wisdom and that these notes only contain points I personally found worth writing down. This is mainly for my own use, but you might find it helpful too.

         ,..........   ..........,
     ,..,'          '.'          ',..,
    ,' ,'            :            ', ',
   ,' ,'             :             ', ',
  ,' ,'              :              ', ',
 ,' ,'............., : ,.............', ',
,'  '............   '.'   ............'  ',
 '''''''''''''''''';''';''''''''''''''''''
                    '''

Table of Contents




The Four Archetypes of a Staff Engineer



Larson breaks down the role of a Staff Engineer into four main archetypes, which can help frame how you approach the role:

  • Tech Lead: Focuses on the technical direction of a team, ensuring high-quality execution, architecture, and aligning the team around shared goals.
  • Solver: Gets pulled into complex, high-impact problems that often involve many teams or systems, operating as a fixer or troubleshooter.
  • Architect: Works on the long-term technical vision for an organization, setting standards and designing systems that will scale and last over time.
  • Right Hand: Functions as a trusted technical advisor to leadership, providing input on strategy, long-term decisions, and navigating organizational politics.

Influence and Impact over Authority



As a Staff Engineer, influence is often more important than formal authority. You’ll rarely have direct control over teams or projects but will need to drive outcomes by influencing peers, other teams, and leadership. It’s about understanding how to persuade, align, and mentor others to achieve technical outcomes.

Breadth and Depth of Knowledge



Staff Engineers often need to maintain a breadth of knowledge across various areas while maintaining depth in a few. This can mean keeping a high-level understanding of several domains (e.g., infrastructure, security, product development) but being able to dive deep when needed in certain core areas.

Mentorship and Sponsorship



An important part of a Staff Engineer’s role is mentoring others, not just in technical matters but in career development as well. Sponsorship goes a step beyond mentorship, where you actively advocate for others, create opportunities for them, and push them toward growth.

Managing Up and Across



Success as a Staff Engineer often depends on managing up (influencing leadership and setting expectations) and managing across (working effectively with peers and other teams). This is often tied to communication skills, the ability to advocate for technical needs, and fostering alignment across departments or organizations.

Strategic Thinking



While Senior Engineers may focus on execution, Staff Engineers are expected to think strategically, making decisions that will affect the company or product months or years down the line. This means balancing short-term execution needs with long-term architectural decisions, which may require challenging short-term pressures.

Emotional Intelligence



The higher you go in engineering roles, the more soft skills, particularly emotional intelligence (EQ), come into play. Building relationships, resolving conflicts, and understanding the broader emotional dynamics of the team and organization become key parts of your role.



Navigating Ambiguity



Staff Engineers are often placed in situations with high ambiguity—whether in defining the problem space, coming up with a solution, or aligning stakeholders. The ability to operate effectively in these unclear areas is critical to success.

Visible and Invisible Work



Much of the work done by Staff Engineers is invisible. Solving complex problems, creating alignment, or influencing decisions doesn’t always result in tangible code, but it can have a massive impact. Larson emphasizes that part of the role is being comfortable with this type of invisible contribution.

Scaling Yourself



At the Staff Engineer level, you must scale your impact beyond direct contribution. This can involve improving documentation, developing repeatable processes, mentoring others, or automating parts of the workflow. The idea is to enable teams and individuals to be more effective, even when you’re not directly involved.

Career Progression and Title Inflation



Larson touches on how different companies have varying definitions of "Staff Engineer," and titles don’t always correlate directly with responsibility or skill. He emphasizes the importance of focusing more on the work you're doing and the impact you're having, rather than the title itself.

These additional points reflect more of the strategic, interpersonal, and leadership aspects that go beyond the technical expertise expected at this level. The role of a Staff Engineer is often about balancing high-level strategy with technical execution, while influencing teams and projects in a sustainable, long-term way.

Not a faster Senior Engineer



  • A Staff engineer is more than just a faster Senior engineer.
  • A Staff engineer is not simply a Senior engineer who is a bit better.

It's important to know what work or which role most energizes you. A Staff engineer is not just a more senior engineer; a Staff engineer fits into one of the archetypes described above.

As a staff engineer, you are always expected to go beyond your comfort zone and learn new things.

Your job will sometimes feel like a Software Engineering Manager's (SEM) and sometimes strangely similar to your previous Senior roles.

A Staff engineer is, like a Manager, a leader. However, being a Manager is a specific job, whereas leadership can be part of any job, especially a Staff engineer's.

The Balance



The more senior you become, the more responsibilities you will have to cope with in less time. Balance your speed of progress with your personal life: don't work late hours, and don't skip your personal care routines.

Do fewer things but do them better. Everything done will accelerate the organization. Everything else will drag it down—quality over quantity.

Don't work at ten things and progress slowly; focus on one thing and finish it.

Only spend some of the time firefighting. Have time for deep thinking. Only deep think some of the time. Otherwise, you lose touch with reality.

Sabbatical: Take at least six months. Otherwise, it won't be as restorative.

More things



  • Provide simple but widely used tools. Complex and powerful tools will have power users but only a very few. All others will not use the tool.
  • In meetings, when someone is inactive, try to pull them in. Pull in at most one person at a time. Don't open the discussion to multiple people.
  • Get used to writing things down and repeating yourself. You will scale yourself much more.
  • Title inflation: skills correspond to work, but the titles don't.

E-Mail your comments to paul@nospam.buetow.org :-)

Other book notes of mine are:

2025-06-07 "A Monk's Guide to Happiness" book notes
2025-04-19 "When: The Scientific Secrets of Perfect Timing" book notes
2024-10-24 "Staff Engineer" book notes (You are currently reading this)
2024-07-07 "The Stoic Challenge" book notes
2024-05-01 "Slow Productivity" book notes
2023-11-11 "Mind Management" book notes
2023-07-17 "Software Developer's Career Guide and Soft Skills" book notes
2023-05-06 "The Obstacle is the Way" book notes
2023-04-01 "Never split the difference" book notes
2023-03-16 "The Pragmatic Programmer" book notes

Back to the main site
Gemtexter 3.0.0 - Let's Gemtext again⁴ gemini://foo.zone/gemfeed/2024-10-02-gemtexter-3.0.0-lets-gemtext-again-4.gmi 2024-10-01T21:46:26+03:00 Paul Buetow aka snonux paul@dev.buetow.org I proudly announce that I've released Gemtexter version `3.0.0`. What is Gemtexter? It's my minimalist static site generator for Gemini Gemtext, HTML and Markdown, written in GNU Bash.

Gemtexter 3.0.0 - Let's Gemtext again⁴



Published at 2024-10-01T21:46:26+03:00

I proudly announce that I've released Gemtexter version 3.0.0. What is Gemtexter? It's my minimalist static site generator for Gemini Gemtext, HTML and Markdown, written in GNU Bash.

https://codeberg.org/snonux/gemtexter

-=[ typewriters ]=-  1/98
                                      .-------.
       .-------.                     _|~~ ~~  |_
      _|~~ ~~  |_       .-------.  =(_|_______|_)
    =(_|_______|_)=    _|~~ ~~  |_   |:::::::::|    .-------.
      |:::::::::|    =(_|_______|_)  |:::::::[]|   _|~~ ~~  |_
      |:::::::[]|      |:::::::::|   |o=======.| =(_|_______|_)
      |o=======.|      |:::::::[]|   `"""""""""`   |:::::::::|
 jgs  `"""""""""`      |o=======.|                 |:::::::[]|
  mod. by Paul Buetow  `"""""""""`                 |o=======.|
                                                   `"""""""""`

Table of Contents




Why Bash?



This project is arguably too complex for a Bash script. Writing it in Bash was an experiment to see how maintainable a "larger" Bash script could be. It's still pretty maintainable and lets me try out new Bash tricks here and there!

Let's list what's new!

HTML exact variant is the only variant



The last version of Gemtexter introduced the HTML exact variant, which wasn't enabled by default. This version of Gemtexter removes the previous (inexact) variant and makes the exact variant the default. This is a breaking change, which is why there is a major version bump of Gemtexter. Here is a reminder of what the exact variant was:

Gemtexter is there to convert your Gemini Capsule into other formats, such as HTML and Markdown. An HTML exact variant can now be enabled in the gemtexter.conf by adding the line declare -rx HTML_VARIANT=exact. The HTML/CSS output changed to reflect a more exact Gemtext appearance and to respect the same spacing as you would see in the Geminispace.

Table of Contents auto-generation



Just add...

 << template::inline::toc

...into a Gemtexter template file and Gemtexter will automatically generate a table of contents for the page based on the headings (see this page's ToC for example). The ToC will also have links to the relevant sections in the HTML and Markdown output. The Gemtext format does not support in-page links, so there the ToC will simply be displayed as a bullet list.

Configurable themes



It was always possible to customize the style of a Gemtexter's resulting HTML page, but all the config options were scattered across multiple files. Now, the CSS style, web fonts, etc., are all configurable via themes.

Simply configure HTML_THEME_DIR in the gemtexter.conf file to the corresponding directory. For example:

declare -xr HTML_THEME_DIR=./extras/html/themes/simple

To customize the theme or create your own, simply copy the theme directory and modify it as needed. This makes it also much easier to switch between layouts.

No use of webfonts by default



The default theme is now "back to the basics" and does not utilize any web fonts. The previous themes, currently future and business, are still part of the release and can easily be enabled. You can check them out in the themes directory.

More



Additionally, a couple of bug fixes, refactorings, and overall documentation improvements were made.

E-Mail your comments to paul@nospam.buetow.org :-)

Other related posts are:

2024-10-02 Gemtexter 3.0.0 - Let's Gemtext again⁴ (You are currently reading this)
2023-07-21 Gemtexter 2.1.0 - Let's Gemtext again³
2023-03-25 Gemtexter 2.0.0 - Let's Gemtext again²
2022-08-27 Gemtexter 1.1.0 - Let's Gemtext again
2021-06-05 Gemtexter - One Bash script to rule it all
2021-04-24 Welcome to the Geminispace

Back to the main site
Site Reliability Engineering - Part 4: Onboarding for On-Call Engineers gemini://foo.zone/gemfeed/2024-09-07-site-reliability-engineering-part-4.gmi 2024-09-07T16:27:58+03:00 Paul Buetow aka snonux paul@dev.buetow.org Welcome to Part 4 of my Site Reliability Engineering (SRE) series. I'm currently working as a Site Reliability Engineer, and I’m here to share what SRE is all about in this blog series.

Site Reliability Engineering - Part 4: Onboarding for On-Call Engineers



Published at 2024-09-07T16:27:58+03:00

Welcome to Part 4 of my Site Reliability Engineering (SRE) series. I'm currently working as a Site Reliability Engineer, and I’m here to share what SRE is all about in this blog series.

2023-08-18 Site Reliability Engineering - Part 1: SRE and Organizational Culture
2023-11-19 Site Reliability Engineering - Part 2: Operational Balance
2024-01-09 Site Reliability Engineering - Part 3: On-Call Culture
2024-09-07 Site Reliability Engineering - Part 4: Onboarding for On-Call Engineers (You are currently reading this)

       __..._   _...__
  _..-"      `Y`      "-._
  \ Once upon |           /
  \\  a time..|          //
  \\\         |         ///
   \\\ _..---.|.---.._ ///
jgs \\`_..---.Y.---.._`//	

This time, I want to share some tips on how to onboard software engineers, QA engineers, and Site Reliability Engineers (SREs) to the primary on-call rotation. Traditionally, onboarding might take half a year (depending on the complexity of the infrastructure), but with a bit of strategy and structured sessions, we've managed to reduce it to just six weeks per person. Let's dive in!

Setting the Scene: Tier-1 On-Call Rotation



First things first, let's talk about Tier-1. This is where the magic begins. Tier-1 covers over 80% of the common on-call cases and is the perfect breeding ground for new on-call engineers to get their feet wet. It's designed to be a manageable training ground.

Why Tier-1?



  • Easy to Understand: Every on-call engineer should be familiar with Tier-1 tasks.
  • Training Ground: This is where engineers start their on-call career. It's purposefully kept simple so that it's not overwhelming right off the bat.
  • Runbook/recipe driven: Every alert is attached to a comprehensive runbook, making it easy for every engineer to follow.

Onboarding Process: From 6 Months to 6 Weeks



So how did we cut down the onboarding time so drastically? Here’s the breakdown of our process:

Knowledge Transfer (KT) Sessions: We kicked things off with more than 10 KT sessions, complete with video recordings. These sessions are comprehensive and cover everything from the basics to some more advanced topics. The recorded sessions mean that new engineers can revisit them anytime they need a refresher.

Shadowing Sessions: Each new engineer undergoes two on-call week shadowing sessions. This hands-on experience is invaluable. They get to see real-time incident handling and resolution, gaining practical knowledge that's hard to get from just reading docs.

Comprehensive Runbooks: We created 64 runbooks (by the time of writing, probably more than 100) that are composable like Lego bricks. Each runbook covers a specific scenario and guides the engineer step by step to resolution. Pairing these with monitoring alerts linked directly to Confluence docs, and from there to the respective runbooks, ensures every alert can be navigated with ease (well, there are always exceptions to the rule...).

Self-Sufficiency & Confidence Building: With all these resources at their fingertips, our on-call engineers become self-sufficient for most of the common issues they'll face (new starters can now handle around 80% of the most common issues within six weeks of joining the company). This boosts their confidence and ensures they can handle Tier-1 incidents independently.

Documentation and Feedback Loop: Continuous improvement is key. We regularly update our documentation based on feedback from the engineers. This makes our process even more robust and user-friendly.

It's All About the Tiers



Let’s briefly touch on the Tier levels:

  • Tier 1: Easy and foundational tasks. Perfect for getting new engineers started. This covers around 80% of all on-call cases we face. This is what we trained on.
  • Tier 2: Slightly more complex, requiring more background knowledge. We trained on some of the topics but not all.
  • Tier 3: Requires a good understanding of the platform/architecture. Likely needs KT sessions with domain experts.
  • Tier DE (Domain Expert): The heavy hitters. Domain experts are required for these tasks.

Growing into Higher Tiers



From Tier-1, engineers naturally grow into Tier-2 and beyond. The structured training and gradual increase in complexity help ensure a smooth transition as they gain experience and confidence. The key here is that engineers stay curious and engaged during on-call so that they always keep learning.

Keeping Runbooks Up to Date



It is important that runbooks are not a "project to be finished"; runbooks have to be maintained and updated over time. Sections may change, new runbooks need to be added, and old ones can be deleted. So the acceptance criteria of an on-call shift would not just be reacting to alerts and incidents, but also reviewing and updating the current runbooks.

Conclusion



By structuring the onboarding process with KT sessions, shadowing, comprehensive runbooks, and a feedback loop, we've been able to fast-track the process from six months to just six weeks. This not only prepares our engineers for the on-call rotation quicker but also ensures they're confident and capable when handling incidents.

If you're looking to optimize your on-call onboarding process, these strategies could be your ticket to a more efficient and effective transition. Happy on-calling!

Back to the main site
Projects I financially support gemini://foo.zone/gemfeed/2024-09-07-projects-i-support.gmi 2024-09-07T16:04:19+03:00 Paul Buetow aka snonux paul@dev.buetow.org This is the list of projects and initiatives I support/sponsor.

Projects I financially support



Published at 2024-09-07T16:04:19+03:00

This is the list of projects and initiatives I support/sponsor.

||====================================================================||
||//$\\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\//$\\||
||(100)==================| FEDERAL SPONSOR NOTE |================(100)||
||\\$//        ~         '------========--------'                \\$//||
||<< /        /$\              // ____ \\                         \ >>||
||>>|  12    //L\\            // ///..) \\         L38036133B   12 |<<||
||<<|        \\ //           || <||  >\  ||                        |>>||
||>>|         \$/            ||  $$ --/  ||        One Hundred     |<<||
||<<|      L38036133B        *\\  |\_/  //* series                 |>>||
||>>|  12                     *\\/___\_//*   1989                  |<<||
||<<\      Open Source   ______/Franklin\________     Supporting   />>||
||//$\                 ~| SPONSORING AND FUNDING |~               /$\\||
||(100)===================  AWESOME OPEN SOURCE =================(100)||
||\\$//\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\\$//||
||====================================================================||
 

Table of Contents




Motivation



Sponsoring free and open-source projects, even for personal use, is important to ensure the sustainability, security, and continuous improvement of the software. It supports developers who often maintain these projects without compensation, helping them provide updates, new features, and security patches. By contributing, you recognize their efforts, foster a culture of innovation, and benefit from perks like early access or support, all while ensuring the long-term viability of the tools you rely on.

Although I am not putting a lot of money into my sponsoring efforts, it still helps the open-source maintainers: the more small sponsors there are, the higher the total sum.

OSnews



I am a silver Patreon member of OSnews. I have been following this site since my student years. It's always been a great source of independent and slightly alternative IT news.

https://osnews.com

Cup o' Go Podcast



I am a Patreon of the Cup o' Go Podcast. The podcast helps me stay updated with the Go community for around 15 minutes per week. I am not a full-time software developer, but my long-term ambition is to become better in Go every week by working on personal projects and tools for work.

https://cupogo.dev

Codeberg



Codeberg e.V. is a nonprofit organization that provides online resources for software development and collaboration. I am a user and a supporting member, paying an annual membership of €24. I didn't have to pay that membership fee, as Codeberg offers all the services I use for free.

https://codeberg.org
https://codeberg.org/snonux - My Codeberg page

GrapheneOS



GrapheneOS is an open-source project that improves Android's privacy and security with sandboxing, exploit mitigations, and a permission model. It does not include Google apps or services but offers a sandboxed Google Play compatibility layer and its own apps and services.

I've made a one-off €100 donation because I really like this project, and I run GrapheneOS on my personal phone as my main daily driver.

https://grapheneos.org/
Why GrapheneOS Rox

AnkiDroid



AnkiDroid is an app that lets you learn flashcards efficiently with spaced repetition. It is compatible with Anki software and supports various flashcard content, syncing, statistics, and more.

I've been learning vocabulary with this free app, and it is, in my opinion, the best flashcard app I know. I've made a one-off $20 donation to this project.

https://opencollective.com/ankidroid

OpenBSD through OpenBSD.Amsterdam



The OpenBSD project produces a FREE, multi-platform 4.4BSD-based UNIX-like operating system. The project emphasizes portability, standardization, correctness, proactive security, and integrated cryptography. As an example of the effect OpenBSD has, the popular OpenSSH software comes from OpenBSD. OpenBSD is freely available from its download sites.

I implicitly support the OpenBSD project through a VM I have rented at OpenBSD Amsterdam. They donate €10 per VM and €15 per VM for every renewal to the OpenBSD Foundation, with dedicated servers running vmm(4)/vmd(8) to host opinionated VMs.

https://www.OpenBSD.org
https://OpenBSD.Amsterdam

ProtonMail



I am not directly funding this project, but I am a very happy paying customer, and I am listing it here as an alternative to big tech if you don't want to run your own mail infrastructure. I am listing ProtonMail here as it is a non-profit organization, and I want to emphasize the importance of considering alternatives to big tech.

https://proton.me/

Libro.fm



This is the alternative to Audible if you are into audiobooks (like I am). For every book or every month of membership, I am also supporting a local bookstore I selected. Their catalog is not as large as Audible's, but it's still pretty decent.

Libro.fm began as a conversation among friends at Third Place Books, a local bookstore in Seattle, Washington, about the growing popularity of audiobooks and the lack of a way for readers to purchase them from independent bookstores. Flash forward, and Libro.fm was founded in 2014.

https://libro.fm

E-mail your comments to paul@nospam.buetow.org :-)

Back to the main site
Typing `127.1` words per minute (`>100wpm average`) gemini://foo.zone/gemfeed/2024-08-05-typing-127.1-words-per-minute.gmi 2024-08-05T17:39:30+03:00 Paul Buetow aka snonux paul@dev.buetow.org After work one day, I noticed some discomfort in my right wrist. Upon research, it appeared to be a mild case of Repetitive Strain Injury (RSI). Initially, I thought that this would go away after a while, but after a week it became even worse. This led me to consider potential causes such as poor posture or keyboard use habits. As an enthusiast of keyboards, I experimented with ergonomic concave ortholinear split keyboards. Wait, what?...

Typing 127.1 words per minute (>100wpm average)



Published at 2024-08-05T17:39:30+03:00; Updated at 2025-02-22

,---,---,---,---,---,---,---,---,---,---,---,---,---,-------,
|1/2| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 0 | + | ' | <-    |
|---'-,-'-,-'-,-'-,-'-,-'-,-'-,-'-,-'-,-'-,-'-,-'-,-'-,-----|
| ->| | Q | W | E | R | T | Y | U | I | O | P | ] | ^ |     |
|-----',--',--',--',--',--',--',--',--',--',--',--',--'|    |
| Caps | A | S | D | F | G | H | J | K | L | \ | [ | * |    |
|----,-'-,-'-,-'-,-'-,-'-,-'-,-'-,-'-,-'-,-'-,-'-,-'---'----|
|    | < | Z | X | C | V | B | N | M | , | . | - |          |
|----'-,-',--'--,'---'---'---'---'---'---'-,-'---',--,------|
| ctrl |  | alt |                          |altgr |  | ctrl |
'------'  '-----'--------------------------'------'  '------'
      Nieminen Mika	

Table of Contents




Introduction



After work one day, I noticed some discomfort in my right wrist. Upon research, it appeared to be a mild case of Repetitive Strain Injury (RSI). Initially, I thought that this would go away after a while, but after a week it became even worse. This led me to consider potential causes such as poor posture or keyboard use habits. As an enthusiast of keyboards, I experimented with ergonomic concave ortholinear split keyboards. Wait, what?...

  • Concave: Some fingers are longer than others. A concave keyboard makes it so that the keycaps meant to be pressed by the longer fingers are further down (e.g., left middle finger for e on a Qwerty layout), and keycaps meant to be pressed by shorter fingers are further up (e.g., right pinky finger for the letter p).
  • Ortholinear: The keys are arranged in straight vertical columns, unlike on most conventional keyboards. Conventional keyboards still resemble old typewriters, where the placement of the keys was optimized so that the typewriter would not jam. There is no such requirement anymore.
  • Split: The keyboard is split into two halves (left and right), allowing one to place either hand where it is most ergonomic.

After discovering ThePrimeagen (I found him long ago, but I never bothered buying the same keyboard he uses) on YouTube and reading/watching a couple of reviews, I thought that as a computer professional, the equipment is expensive anyway (laptop, adjustable desk, comfortable chair), so why not invest a bit more into the keyboard? I purchased the Kinesis Advantage360 Professional keyboard.

Kinesis review



For an in-depth review, have a look at this great article:

Review of the Kinesis Advantage360 Professional keyboard

Top build quality



Overall, the keyboard feels like excellent quality and is robust. It has some weight to it, so it is not ideally suited for travel, but I have a different keyboard to solve that (see later in this post). I love how it is built and how it feels.

Kinesis Adv.360 Pro at home

Bluetooth connectivity



Despite encountering concerns about Bluetooth connectivity issues with the Kinesis keyboard during my research, I purchased one anyway, as I intended to use it only via USB. However, the firmware updates released since have addressed these reported Bluetooth issues, and as a result, I did not experience any difficulties with the Bluetooth functionality. This positive outcome allowed me to enjoy using the keyboard wirelessly as well.

Gateron Brown key switches



Many voices on the internet seem to dislike the Gateron Brown switches, the only official choice for non-clicky tactile switches in the Kinesis, so I was also a bit concerned. I almost went with Cherry MX Browns for my Kinesis (a custom build from a third-party provider partnering with Kinesis). Still, I decided on Gateron Browns to try different switches than the Cherry MX Browns I already have on my ZSA Moonlander keyboard (another ortholinear split keyboard, but without concave keycaps).

At first, I was disappointed by the Gaterons, as they initially felt a bit mushy compared to the Cherries. Still, over the weeks I grew to prefer them because of their smoothness. Over time, the tactile bumps also became more noticeable (as my perception of them improved). Because of their less pronounced tactile feedback, the Gaterons are less tiring for long typing sessions and better suited for a relaxed typing experience.

So, the Cherry MX switches feel sharper but are more tiring in the long run, while the Gaterons are easier to type on, with slightly less pronounced tactile feedback.

Keycaps



If you ever purchase a Kinesis keyboard, go with the PBT keycaps. They upgrade the typing experience a lot. The only thing you will lose is that the backlighting won't shine through them. But that is a reasonable tradeoff; when do I need backlighting anyway? I am supposed to look at the screen, not the keyboard, while typing.

I went with the blank keycaps, by the way.

Kinesis Adv.360 Pro at home

Keymap editor



There is no official keymap editor. You have to edit a configuration file manually, build the firmware from scratch, and upload the firmware with the new keymap to both keyboard halves. The Professional version of this keyboard, by the way, runs on the ZMK open-source firmware.

Many users find the need for an easy-to-use keymap editor an issue. But this is the Pro model. You can also go with the non-Pro, which runs on non-open-source firmware and has no Bluetooth (it must be operated entirely on USB).

There is a third-party solution that is supposed to make configuring the keymap for the Professional model a breeze, but I have never used it. As a part-time programmer and full-time Site Reliability Engineer, I am okay with configuring the keymap in my text editor and building the firmware in a local Docker container, which is one of the standard ways of doing it. You could also use a GitHub pipeline for the firmware build, but I prefer building it locally on my machine. This all seems natural to me, but it may be an issue for "the average Joe" user.

First steps



I didn't measure the usual words per minute (wpm) on my previous keyboard, the ZSA Moonlander, but I guess that it was around 40-50wpm. Once the Kinesis arrived, I started practising. The experience was quite different due to the concave keycaps, so I barely managed 10wpm on the first day.

I quickly noticed that I could not continue using the freestyle 6-finger typing system I was used to on my Moonlander or any previous keyboards I worked with. I learned ten-finger touch typing from scratch to be more efficient with the Kinesis keyboard. The keyboard forces you to embrace touch typing.

Sometimes, there were brain farts, and I couldn't type at all. The trick was not to freak out about it, but to move on. If your average goes down a bit for a day, it doesn't matter; the long-term trend over several days and weeks matters, not the one-off wpm high score.

Although my wrist pain seemed to go away after the first week of using the Kinesis, my fingers became tired from adjusting to the new way of typing. My hands were stiff, as if I had been training for the Olympics. Only after three weeks did I start to feel comfortable with it. If it weren't for the comments I read online, I would have sent it back after week two.

I also had a problem with the right pinky finger, with which I could not comfortably reach the p key without moving the whole hand. An easy fix was to swap p with ; on the keyboard layout.

Considering alternate layouts



As I was going to learn 10-finger touch typing from scratch, I also played with the thought of switching from the Qwerty to the Dvorak or Colemak keymap, but after reading some comments on the internet, I decided against it:

  • These layouts (Dvorak and Colemak) minimize finger travel for the most commonly used English words, but they don't necessarily give you a better wpm score.
  • One comment on Reddit also mentioned that getting stiff fingers is more likely with these layouts than with Qwerty, as Qwerty made him stretch out his fingers more often, which helps here.
  • There are also many applications and websites whose keyboard shortcuts are Qwerty-optimized.
  • You won't easily be able to use someone else's computer, as it will likely be Qwerty. Some report that after using an alternative layout for a while, they forget how to use Qwerty.

Training how to type



Tools



One of the most influential tools in my touch typing journey has been keybr.com. This site/app helped me learn 10-finger touch typing, and I practice daily for 30 minutes (in the first two weeks, up to an hour every day). The key is persistence and focus on technique rather than speed; the latter naturally improves with regular practice. Precision matters, too, so I always correct my errors using the backspace key.

https://keybr.com

I also used a command-line tool called tt, which is written in Go. It has a feature that I found very helpful: the ability to practice typing by piping custom text into it. Additionally, I appreciated its customization options, such as choosing a colour theme and specifying how statistics are displayed.

https://github.com/lemnos/tt

I wrote myself a small Ruby script that would randomly select a paragraph from one of my eBooks or book notes and pipe it to tt. This helped me remember some of the books I read and also practice touch typing.
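The Ruby script itself is not published here, but the idea can be sketched in a few lines of shell (the awk-based paragraph picking, the pick_paragraph function name, and the notes.txt filename are my assumptions, not the author's actual code):

```shell
# Pick a random blank-line-separated paragraph from a text file.
# RS='' puts awk into paragraph mode (records split on blank lines).
pick_paragraph() {
    awk -v RS='' 'BEGIN { srand() }
                  { p[NR] = $0 }
                  END { print p[int(rand() * NR) + 1] }' "$1"
}
# Usage (tt from github.com/lemnos/tt must be installed):
# pick_paragraph notes.txt | tt
```

Piping real prose into tt like this makes practice sessions double as a light review of one's reading notes.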

My keybr.com statistics



Overall, I trained for around 4 months in more than 5,000 sessions. My top speed in a session was 127.1wpm (up from barely 10wpm at the beginning).

All time stats

My overall average speed over those 5,000 sessions was 80wpm. The average speed over the last week was over 100wpm. The green line represents the wpm average (increasing trend), the purple line represents the number of keys in the practices (not much movement there, as all keys are unlocked), and the red line represents the average typing accuracy.

Typing speed over lessons

Around the middle, you can see a dip in the wpm average. This was where I swapped the p and ; keys, but after some retraining, I came back to the previous level and beyond.

Tips and tricks



These are some tips and tricks I learned along the way to improve my typing speed:

Relax



It's easy to cramp up when trying to hit that new wpm mark, but this just holds you back. Relax and type at a natural pace. Now I also understand why my Karate Sensei back in London kept screaming "RELAAAX" at me during practice... It didn't help much back then, though, as it is difficult to relax while someone screams at you!

Focus on accuracy first



This goes with the previous point. Instead of trying to speed through sessions as quickly as possible, slow down and try to type the words correctly—so don't rush it. If you aren't fast yet, the reason is that your brain hasn't trained enough. It will come over time, and you will be faster.

Chording



A trick to getting faster is to type by word and pause between each word so you learn the words by chords. From 80wpm and beyond, this makes a real difference.

Punctuation and Capitalization



I included 10% punctuation and 20% capital letters in my keybr.com practice sessions to simulate real typing conditions, which improved my overall working efficiency. I guess I would have reached 120wpm on average if I hadn't included these options...

Reverse shifting



Reverse shifting, aka left-right shifting, means you...

  • ...use the left shift key for letters on the right keyboard side.
  • ...use the right shift key for letters on the left keyboard side.

This makes using the shift key a breeze.

Enter the flow state



Listening to music helps me enter a flow state during practice sessions, which makes typing training a bit addictive (which is a good thing, isn't it?).

Repeat every word



There's a setting on keybr.com that makes it so that every word is always repeated, having you type every word twice in a row. I liked this feature very much, and I think it also helped to improve my practice.

Don't use the same finger for two consecutive keystrokes



Apparently, if you want to type fast, you should avoid using the same finger for two consecutive keystrokes. This means you don't always use the same finger for the same key. However, there are no hard and fast rules, so everyone develops their own system for typing word combinations. An exception is typing the very same letter twice in a row (e.g., the double t in letter); here, you use the same finger for both ts.

Warm-up



You can't reach your average typing speed first thing in the morning; it helps to warm up before the exercise, or to practice later during the day. Also, some days are good and others not so much, e.g., after a bad night's sleep. What matters is the mid- and long-term trend, though, not the daily fluctuations.

Travel keyboard



As mentioned, the Kinesis is a great keyboard, but it is not meant for travel.

I guess keyboards will always be my expensive hobby, so I also purchased another ergonomic, ortho-linear, concave split keyboard, the Glove80 (with the Red Pro low-profile switches). This keyboard is much lighter and, in my opinion, much better suited for travel than the Kinesis. It also comes with a great travel case.

Here is a photo of me using it with my Surface Go 2 (it runs Linux, by the way) while waiting for the baggage drop at the airport:

Traveling with the Glove80 using my Surface Go 2

For everyday work, I prefer the tactile Browns on the Kinesis over the Red Pros I have on the Glove80 (normal profile vs. low profile). The Kinesis feels much more premium, whereas the Glove80 is much lighter and easier to store in a rucksack (the official travel case is a bit bulky, so I simply wrapped the keyboard in bubble wrap).

The F-key row is odd on the Glove80. I would have preferred more keys on the sides, like on the Kinesis, where I use them for [] {} (), which is pretty handy. However, I like the thumb cluster of the Glove80 more than the one on the Kinesis.

The good thing is that I can switch between both keyboards instantly without retraining my muscle memory. I've configured (as much as possible) the same keymaps on both my Kinesis and Glove80, making it easy to switch between them on any occasion.

Interested in the Glove80? I suggest also reading this review:

Review of the Glove80 keyboard

Upcoming custom Kinesis build



As I mentioned, keyboards will remain an expensive hobby of mine. I don't regret anything here, though. After all, I use keyboards at my day job. I've ordered a Kinesis custom build with the Gateron Kangaroo switches, and I'm excited to see how that compares to my current setup. I'm still deciding whether to keep my Gateron Brown-equipped Kinesis as a secondary keyboard or possibly leave it at my in-laws for use when visiting or to sell it.

Update 2025-02-22: I've received my custom Kinesis Adv. 360 build with the Gateron Baby Kangaroo key switches. I am absolutely in love! I will keep my Gateron Brown version around, though.

Conclusion



When I traveled with the Glove80 for work to the London office, a colleague stared at my keyboard and made jokes that it might be broken (split into two halves). But other than that...

Ten-finger touch typing has improved my efficiency and has become a rewarding discipline. Whether it's the keyboards I use, the tools I practice with, or the techniques I've adopted, each step has been a learning experience. I hope sharing my journey provides valuable insights and inspiration for anyone looking to improve their touch typing skills.

I also accidentally started using a 10-finger-like system (maybe still six fingers, but better than before) on my regular laptop keyboard. The form factor is different there (not ortholinear, no concave keycaps, etc.), but my typing has become more efficient there too (even if only by a little bit).

I don't want to return to a non-concave keyboard as my default. I will still use other keyboards once in a while, but only for short periods or when I have to (e.g., when travelling with my laptop and there is no space for an external keyboard).

Learning to touch type has been an eye-opening experience for me, not just for work but also for personal projects. Now, writing documentation is so much fun; who could believe that? Furthermore, working with Slack (communicating with colleagues) is more fun now as well.

E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site
'The Stoic Challenge' book notes gemini://foo.zone/gemfeed/2024-07-07-the-stoic-challenge-book-notes.gmi 2024-07-07T12:46:55+03:00 Paul Buetow aka snonux paul@dev.buetow.org These are my personal takeaways after reading 'The Stoic Challenge: A Philosopher's Guide to Becoming Tougher, Calmer, and More Resilient' by William B. Irvine.

"The Stoic Challenge" book notes



Published at 2024-07-07T12:46:55+03:00

These are my personal takeaways after reading "The Stoic Challenge: A Philosopher's Guide to Becoming Tougher, Calmer, and More Resilient" by William B. Irvine.

         ,..........   ..........,
     ,..,'          '.'          ',..,
    ,' ,'            :            ', ',
   ,' ,'             :             ', ',
  ,' ,'              :              ', ',
 ,' ,'............., : ,.............', ',
,'  '............   '.'   ............'  ',
 '''''''''''''''''';''';''''''''''''''''''
                    '''

Table of Contents




God sets you up for a challenge



The gods set you up for a challenge to see how resilient you are. Is getting angry worth the price? If you stay calm, you can find the optimal workaround for the obstacle. Stay calm even with big setbacks. Practice minimalism of negative emotions.

Put a positive spin on everything. What should you do if someone wrongs you? Don't get angry; there is no point in that, it just makes you suffer. Do the best with what you've got now, and keep calm and carry on. A resilient person refuses to play the role of a victim. You can develop setback-response skills. Turn a setback, e.g., a handicap, into a personal triumph.

It is not the things done to you or that happen to you that matter, but how you take them and how you react to them.

Don't row against the other boats but against your own lazy self. It doesn't matter if you are first or last, as long as you defeat your lazy self.

Stoics are thankful that they are mortal, as this reminds you of how great it is to be alive at all. In dying, we are more alive than we have ever been, as everything you do could be the last time you do it. Rather than fighting your death, you should embrace it if there are no workarounds. Embrace a good death.

Negative visualization



It is easy to take what we have for granted.

  • Imagine the negative and then think that things are actually much better than they seem to be.
  • Close your eyes and imagine you are colour-blind for a minute, then open your eyes again and see all the colours. You will be grateful for being able to see them.
  • Now close your eyes for a minute and imagine you were blind and would never be able to experience the world again, and let it sink in. When you open your eyes again, you will feel a lot of gratitude.
  • Last-time meditation: it lets you appreciate life as it is now. Life gets vitalised again.

Oh, nice trick, you stoic "god"! ;-)



Take setbacks as a challenge. Also take them with some humor.

  • A setback in a setback, how genius :-)
  • A setback in a setback in a setback: the stoic gods work overtime, eh? :-)

What would the stoic gods do next? This is just a test strategy of theirs. Don't be frustrated at all; be astonished by what comes next. Thank the stoic gods for testing you. This is the Stoics' comfort-zone extension, aka toughness training.

E-Mail your comments to paul@nospam.buetow.org :-)

Other book notes of mine are:

2025-06-07 "A Monk's Guide to Happiness" book notes
2025-04-19 "When: The Scientific Secrets of Perfect Timing" book notes
2024-10-24 "Staff Engineer" book notes
2024-07-07 "The Stoic Challenge" book notes (You are currently reading this)
2024-05-01 "Slow Productivity" book notes
2023-11-11 "Mind Management" book notes
2023-07-17 "Software Developers Career Guide and Soft Skills" book notes
2023-05-06 "The Obstacle is the Way" book notes
2023-04-01 "Never split the difference" book notes
2023-03-16 "The Pragmatic Programmer" book notes

Back to the main site
Random Weird Things - Part Ⅰ gemini://foo.zone/gemfeed/2024-07-05-random-weird-things.gmi 2024-07-05T10:59:59+03:00 Paul Buetow aka snonux paul@dev.buetow.org Every so often, I come across random, weird, and unexpected things on the internet. I thought it would be neat to share them here from time to time. As a start, here are ten of them.

Random Weird Things - Part Ⅰ



Published at 2024-07-05T10:59:59+03:00; Updated at 2025-02-08

Every so often, I come across random, weird, and unexpected things on the internet. I thought it would be neat to share them here from time to time. As a start, here are ten of them.

2024-07-05 Random Weird Things - Part Ⅰ (You are currently reading this)
2025-02-08 Random Weird Things - Part Ⅱ

		       /\_/\
WHOA!! 	     ( o.o )
		       > ^ <
		      /  -  \
		    /        \
		   /______\  \

Table of Contents




1. bad.horse traceroute



Run traceroute to get the poem (or song).

Update: A reader hinted that by specifying -m 60, there will be even more output!

❯ traceroute -m 60 bad.horse
traceroute to bad.horse (162.252.205.157), 60 hops max, 60 byte packets
 1  _gateway (192.168.1.1)  5.237 ms  5.264 ms  6.009 ms
 2  77-85-0-2.ip.btc-net.bg (77.85.0.2)  8.753 ms  7.112 ms  8.336 ms
 3  212-39-69-103.ip.btc-net.bg (212.39.69.103)  9.434 ms  9.268 ms  9.986 ms
 4  * * *
 5  xe-1-2-0.mpr1.fra4.de.above.net (80.81.194.26)  39.812 ms  39.030 ms  39.772 ms
 6  * ae12.cs1.fra6.de.eth.zayo.com (64.125.26.172)  123.576 ms *
 7  * * *
 8  * * *
 9  ae10.cr1.lhr15.uk.eth.zayo.com (64.125.29.17)  119.097 ms  119.478 ms  120.767 ms
10  ae2.cr1.lhr11.uk.zip.zayo.com (64.125.24.140)  120.398 ms  121.147 ms  120.948 ms
11  * * *
12  ae25.mpr1.yyz1.ca.zip.zayo.com (64.125.23.117)  145.072 ms *  181.773 ms
13  ae5.mpr1.tor3.ca.zip.zayo.com (64.125.23.118)  168.239 ms  168.158 ms  168.137 ms
14  64.124.217.237.IDIA-265104-ZYO.zip.zayo.com (64.124.217.237)  168.026 ms  167.999 ms  165.451 ms
15  * * *
16  t00.toroc1.on.ca.sn11.net (162.252.204.2)  131.598 ms  131.308 ms  131.482 ms
17  bad.horse (162.252.205.130)  131.430 ms  145.914 ms  130.514 ms
18  bad.horse (162.252.205.131)  136.634 ms  145.295 ms  135.631 ms
19  bad.horse (162.252.205.132)  139.158 ms  148.363 ms  138.934 ms
20  bad.horse (162.252.205.133)  145.395 ms  148.054 ms  147.140 ms
21  he.rides.across.the.nation (162.252.205.134)  149.687 ms  147.731 ms  150.135 ms
22  the.thoroughbred.of.sin (162.252.205.135)  156.644 ms  155.155 ms  156.447 ms
23  he.got.the.application (162.252.205.136)  161.187 ms  162.318 ms  162.674 ms
24  that.you.just.sent.in (162.252.205.137)  166.763 ms  166.675 ms  164.243 ms
25  it.needs.evaluation (162.252.205.138)  172.073 ms  171.919 ms  171.390 ms
26  so.let.the.games.begin (162.252.205.139)  175.386 ms  174.180 ms  175.965 ms
27  a.heinous.crime (162.252.205.140)  180.857 ms  180.766 ms  180.192 ms
28  a.show.of.force (162.252.205.141)  187.942 ms  186.669 ms  186.986 ms
29  a.murder.would.be.nice.of.course (162.252.205.142)  191.349 ms  191.939 ms  190.740 ms
30  bad.horse (162.252.205.143)  195.425 ms  195.716 ms  196.186 ms
31  bad.horse (162.252.205.144)  199.238 ms  200.620 ms  200.318 ms
32  bad.horse (162.252.205.145)  207.554 ms  206.729 ms  205.201 ms
33  he-s.bad (162.252.205.146)  211.087 ms  211.649 ms  211.712 ms
34  the.evil.league.of.evil (162.252.205.147)  212.657 ms  216.777 ms  216.589 ms
35  is.watching.so.beware (162.252.205.148)  220.911 ms  220.326 ms  221.961 ms
36  the.grade.that.you.receive (162.252.205.149)  225.384 ms  225.696 ms  225.640 ms
37  will.be.your.last.we.swear (162.252.205.150)  232.312 ms  230.989 ms  230.919 ms
38  so.make.the.bad.horse.gleeful (162.252.205.151)  235.761 ms  235.291 ms  235.585 ms
39  or.he-ll.make.you.his.mare (162.252.205.152)  241.350 ms  239.407 ms  238.394 ms
40  o_o (162.252.205.153)  246.154 ms  247.650 ms  247.110 ms
41  you-re.saddled.up (162.252.205.154)  250.925 ms  250.401 ms  250.619 ms
42  there-s.no.recourse (162.252.205.155)  256.071 ms  251.154 ms  255.340 ms
43  it-s.hi-ho.silver (162.252.205.156)  260.152 ms  261.775 ms  261.544 ms
44  signed.bad.horse (162.252.205.157)  262.430 ms  261.410 ms  261.365 ms

2. ASCII cinema



Fancy watching Star Wars Episode IV in ASCII? Head to the ASCII cinema:

https://asciinema.org/a/569727

3. Netflix's Hello World application



Netflix has got a Hello World application running in production 😱

  • https://www.Netflix.com/helloworld

By the time this was posted, it seems that Netflix had taken it offline... I should have taken a screenshot!

C programming



4. Indexing an array



In C, you can index an array like this: array[i] (not surprising). But i[array] works as well and is valid C code 🤯. That's because, per the spec, A[B] is equivalent to *(A + B), and operand order doesn't matter for the + operator. All three loops below produce the same output. It would be funny to sneak i[array] into a merge request on April Fools' Day!

#include <stdio.h>

int main(void) {
  int array[5] = { 1, 2, 3, 4, 5 };

  for (int i = 0; i < 5; i++)
    printf("%d\n", array[i]);

  for (int i = 0; i < 5; i++)
    printf("%d\n", i[array]);

  for (int i = 0; i < 5; i++)
    printf("%d\n", *(i + array));
}

5. Variables with prefix $



In C, you can prefix variables with $! E.g., the following compiles with common compilers such as GCC and Clang ($ in identifiers is a compiler extension, not ISO C) 🫠:

#include <stdio.h>

int main(void) {
  int $array[5] = { 1, 2, 3, 4, 5 };

  for (int $i = 0; $i < 5; $i++)
    printf("%d\n", $array[$i]);

  for (int $i = 0; $i < 5; $i++)
    printf("%d\n", $i[$array]);

  for (int $i = 0; $i < 5; $i++)
    printf("%d\n", *($i + $array));
}

6. Object oriented shell scripts using ksh



Experienced software developers are aware that scripting languages like Python, Perl, Ruby, and JavaScript support object-oriented programming (OOP) concepts such as classes and inheritance. However, many might be surprised to learn that the latest version of the Korn shell (Version 93t+) also supports OOP. In ksh93, OOP is implemented using user-defined types:

#!/usr/bin/ksh93
 
typeset -T Point_t=(
    integer -h 'x coordinate' x=0
    integer -h 'y coordinate' y=0
    typeset -h 'point color'  color="red"

    function getcolor {
        print -r ${_.color}
    }

    function setcolor {
        _.color=$1
    }

    setxy() {
        _.x=$1; _.y=$2
    }

    getxy() {
        print -r "(${_.x},${_.y})"
    }
)
 
Point_t point
 
echo "Initial coordinates are (${point.x},${point.y}). Color is ${point.color}"
 
point.setxy 5 6
point.setcolor blue
 
echo "New coordinates are ${point.getxy}. Color is ${point.getcolor}"
 
exit 0

Using types to create object oriented Korn shell 93 scripts

7. This works in Go



There is no pointer arithmetic in Go like in C, but it is still possible to do some brain teasers with pointers 😧:

package main

import "fmt"

func main() {
	var i int
	f := func() *int {
		return &i
	}
	*f()++
	fmt.Println(i)
}

Go playground

8. "I am a Teapot" HTTP response code



Defined in 1998 as one of the IETF's traditional April Fools' jokes (RFC 2324), the Hyper Text Coffee Pot Control Protocol specifies status code 418 "I'm a teapot", which is not intended for actual HTTP server implementations. According to the RFC, this code should be returned by teapots when asked to brew coffee. The status code also serves as an Easter egg on some websites, such as Google.com's "I'm a teapot" page. Occasionally, it is used to respond to a blocked request, even though the more appropriate response would be the 403 Forbidden status code.

https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#418

9. jq is a functional programming language



Many know of jq, the handy little tool and Swiss Army knife for JSON parsing.

https://github.com/jqlang/jq

What many don't know is that jq is actually a full-blown functional programming language, jqlang. Have a look at the language description:

https://github.com/jqlang/jq/wiki/jq-Language-Description

As a matter of fact, the language is so powerful that there exists an implementation of jq in jq itself:

https://github.com/wader/jqjq

Here is a snippet from jqjq to get a feel for jqlang:

def _token:
	def _re($re; f):
	  ( . as {$remain, $string_stack}
	  | $remain
	  | match($re; "m").string
	  | f as $token
	  | { result: ($token | del(.string_stack))
	    , remain: $remain[length:]
	    , string_stack:
	        ( if $token.string_stack == null then $string_stack
	          else $token.string_stack
	          end
	        )
	    }
	  );
	if .remain == "" then empty
	else
	  ( . as {$string_stack}
	  | _re("^\\s+"; {whitespace: .})
	  // _re("^#[^\n]*"; {comment: .})
	  // _re("^\\.[_a-zA-Z][_a-zA-Z0-9]*"; {index: .[1:]})
	  // _re("^[_a-zA-Z][_a-zA-Z0-9]*"; {ident: .})
	  // _re("^@[_a-zA-Z][_a-zA-Z0-9]*"; {at_ident: .})
	  // _re("^\\$[_a-zA-Z][_a-zA-Z0-9]*"; {binding: .})
	  # 1.23, .123, 123e2, 1.23e2, 123E2, 1.23e+2, 1.23E-2 or 123
	  // _re("^(?:[0-9]*\\.[0-9]+|[0-9]+)(?:[eE][-\\+]?[0-9]+)?"; {number: .})
	  // _re("^\"(?:[^\"\\\\]|\\\\.)*?\\\\\\(";
	      ( .[1:-2]
	      | _unescape
	      | {string_start: ., string_stack: ($string_stack+["\\("])}
	      )
	    )
	 .
	 .
	 .

10. Regular expression to verify email addresses



This is a pretty old meme, but still worth posting here (as some may be unaware). The Perl regex to validate RFC 822 email addresses is 😱:

(?:(?:\r\n)?[ \t])*(?:(?:(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t]
)+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:
\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(
?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ 
\t]))*"(?:(?:\r\n)?[ \t])*))*@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\0
31]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\
>(?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+
(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:
(?:\r\n)?[ \t])*))*|(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z
|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)
?[ \t])*)*\<(?:(?:\r\n)?[ \t])*(?:@(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\
r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[
 \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)
?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t]
)*))*(?:,@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[
 \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*
)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t]
)+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*)
*:(?:(?:\r\n)?[ \t])*)?(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+
|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r
\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:
\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t
>))*"(?:(?:\r\n)?[ \t])*))*@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031
>+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](
?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?
:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?
:\r\n)?[ \t])*))*\>(?:(?:\r\n)?[ \t])*)|(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?
:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?
[ \t]))*"(?:(?:\r\n)?[ \t])*)*:(?:(?:\r\n)?[ \t])*(?:(?:(?:[^()<>@,;:\\".\[\] 
\000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|
\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>
@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"
(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*))*@(?:(?:\r\n)?[ \t]
)*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\
".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?
:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[
\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*|(?:[^()<>@,;:\\".\[\] \000-
\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(
?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*)*\<(?:(?:\r\n)?[ \t])*(?:@(?:[^()<>@,;
:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([
^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\"
.\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\
>\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*(?:,@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\
[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\
r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] 
\000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]
|\\.)*\](?:(?:\r\n)?[ \t])*))*)*:(?:(?:\r\n)?[ \t])*)?(?:[^()<>@,;:\\".\[\] \0
00-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\
.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,
;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?
:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*))*@(?:(?:\r\n)?[ \t])*
(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".
\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[
^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]
>))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*\>(?:(?:\r\n)?[ \t])*)(?:,\s*(
?:(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\
".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*)(?:\.(?:(
?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[
\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t
>)*))*@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t
>)+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?
:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|
\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*|(?:
[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\
>]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*)*\<(?:(?:\r\n)
?[ \t])*(?:@(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["
()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)
?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>
@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*(?:,@(?:(?:\r\n)?[
 \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,
;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t]
)*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\
".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*)*:(?:(?:\r\n)?[ \t])*)?
(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".
\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*)(?:\.(?:(?:
\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\[
"()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])
*))*@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])
+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\
.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z
|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*\>(?:(
?:\r\n)?[ \t])*))*)?;\s*)

https://pdw.ex-parrot.com/Mail-RFC822-Address.html

I hope you had some fun. E-Mail your comments to paul@nospam.buetow.org :-)

Other related posts are:

Back to the main site
Terminal multiplexing with `tmux` - Z-Shell edition gemini://foo.zone/gemfeed/2024-06-23-terminal-multiplexing-with-tmux.gmi 2024-06-23T22:41:59+03:00 Paul Buetow aka snonux paul@dev.buetow.org This is the Z-Shell version. There is also a Fish version:

Terminal multiplexing with tmux - Z-Shell edition



Published at 2024-06-23T22:41:59+03:00; Last updated 2025-05-02

This is the Z-Shell version. There is also a Fish version:

./2025-05-02-terminal-multiplexing-with-tmux-fish-edition.html

Tmux (Terminal Multiplexer) is a powerful, terminal-based tool that manages multiple terminal sessions within a single window. Here are some of its primary features and functionalities:

  • Session management
  • Window and Pane management
  • Persistent Workspace
  • Customization

https://github.com/tmux/tmux/wiki

         _______
        |.-----.|
        || Tmux||
        ||_.-._||
        `--)-(--`
       __[=== o]___
      |:::::::::::|\
jgs   `-=========-`()
    mod. by Paul B.

Table of Contents




Before continuing...



Before continuing to read this post, I encourage you to get familiar with Tmux first (unless you already know the basics). You can go through the official getting started guide:

https://github.com/tmux/tmux/wiki/Getting-Started

I can also recommend this book (it is the book I used to get started with Tmux):

https://pragprog.com/titles/bhtmux2/tmux-2/

Over the years, I have built a couple of shell helper functions to optimize my workflows, and Tmux is extensively integrated into my daily routine (personal and work). Colleagues have asked me several times about my Tmux config and helper scripts, so I figured it would be neat to blog about them so that anyone interested can copy my configuration and scripts.

The configuration and scripts in this blog post are only the non-work-specific parts. There are more helper scripts, which I only use for work (and aren't really useful outside of work due to the way servers and clusters are structured there).

Tmux is highly configurable, and I think I am only scratching the surface of what is possible with it. Nevertheless, it may still be useful for you. I also love that Tmux is part of the OpenBSD base system!

Shell aliases



I am a user of the Z-Shell (zsh), but I believe all the snippets mentioned in this blog post also work with Bash.

https://www.zsh.org

For the most common Tmux commands I use, I have created the following shell aliases:

alias tm=tmux
alias tl='tmux list-sessions'
alias tn=tmux::new
alias ta=tmux::attach
alias tx=tmux::remote
alias ts=tmux::search
alias tssh=tmux::cluster_ssh

Note the tmux:: prefixes; those are custom shell functions, not part of the Tmux distribution. Let's run through the aliases one by one.

The first two are pretty straightforward. tm is simply a shorthand for tmux, so I have to type less, and tl lists all Tmux sessions that are currently open. No magic here.

The tn alias - Creating a new session



The tn alias is referencing this function:

# Create a new session, or attach to it if it already exists
tmux::new () {
    readonly session=$1
    local date=date
    if where gdate &>/dev/null; then
        date=gdate
    fi

    tmux::cleanup_default
    if [ -z "$session" ]; then
        tmux::new T$($date +%s)
    else
        tmux new-session -d -s $session
        tmux -2 attach-session -t $session || tmux -2 switch-client -t $session
    fi
}
alias tn=tmux::new

There is a lot going on here, so let's have a detailed look at what it is doing. Note that the function relies on GNU date: on macOS, it looks for the gdate command and falls back to date otherwise. You need to install GNU Date (part of GNU coreutils) on macOS, as it isn't available there by default. Since I use Fedora Linux on my personal laptop and a MacBook for work, the function has to work on both.

First, a Tmux session name can be passed to the function as the first argument. The session name is optional; without it, the function calls itself with the default name T$($date +%s), which is T followed by the UNIX epoch, e.g. T1717133796.
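
The default-name scheme can be tried standalone (a minimal sketch, independent of Tmux; on macOS, substitute gdate for date):

```shell
# Build a default session name the same way tmux::new does:
# "T" followed by the current UNIX epoch in seconds.
name="T$(date +%s)"
echo "$name" # e.g. T1717133796
```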

Cleaning up default sessions automatically



Note also the call to tmux::cleanup_default; it cleans up all already-opened default sessions that aren't attached. Those sessions are only temporary, and I had too many flying around after a while. So, I decided to auto-delete them whenever they aren't attached. If I want to keep a session around, I rename it with the Tmux command prefix-key $. This is the cleanup function:

tmux::cleanup_default () {
    local s
    tmux list-sessions | grep '^T.*: ' | grep -F -v attached |
    cut -d: -f1 | while read -r s; do
        echo "Killing $s"
        tmux kill-session -t "$s"
    done
}

The cleanup function kills all open Tmux sessions that haven't been renamed properly yet—but only if they aren't attached (e.g., don't run in the foreground in any terminal). Cleaning them up automatically keeps my Tmux sessions as neat and tidy as possible.
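
To see which sessions the pipeline would select without killing anything, you can feed it simulated list-sessions output (a hypothetical sketch; the session names and dates are made up):

```shell
# Simulated `tmux list-sessions` output: only unattached sessions
# whose names match the temporary T... scheme should be selected.
sessions='T1717133796: 1 windows (created Fri May 31 2024)
work: 2 windows (created Fri May 31 2024) (attached)
T1717140000: 1 windows (created Fri May 31 2024) (attached)'

# The same filter pipeline as in tmux::cleanup_default, minus the kill:
printf '%s\n' "$sessions" | grep '^T.*: ' | grep -F -v attached | cut -d: -f1
# → T1717133796
```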

Renaming sessions



Whenever I am in a temporary session (named T....), I may decide that I want to keep this session around. I have to rename the session to prevent the cleanup function from doing its thing. That's, as mentioned already, easily accomplished with the standard prefix-key $ Tmux command.

The ta alias - Attaching to a session



This alias refers to the following function, which tries to attach to an already-running Tmux session.

tmux::attach () {
    readonly session=$1

    if [ -z "$session" ]; then
        tmux attach-session || tmux::new
    else
        tmux attach-session -t $session || tmux::new $session
    fi
}
alias ta=tmux::attach

If no session is specified (as the argument of the function), it will try to attach to the first open session. If no Tmux server is running, it will create a new one with tmux::new. Otherwise, with a session name given as the argument, it will attach to it. If unsuccessful (e.g., the session doesn't exist), it will be created and attached to.

The tx alias - For a nested remote session



This SSHs into the remote server specified and then, remotely on the server itself, starts a nested Tmux session. So we have one Tmux session on the local computer and, inside of it, an SSH connection to a remote server with a Tmux session running again. The benefit of this is that, in case my network connection breaks down, the next time I connect, I can continue my work on the remote server exactly where I left off. The session name is the name of the server being SSHed into. If a session like this already exists, it simply attaches to it.

tmux::remote () {
    readonly server=$1
    tmux new -s $server "ssh -t $server 'tmux attach-session || tmux'" || \
        tmux attach-session -d -t $server
}
alias tx=tmux::remote

Change of the Tmux prefix for better nesting



To make nested Tmux sessions work smoothly, one must change the Tmux prefix key locally or remotely. By default, the Tmux prefix key is Ctrl-b, so Ctrl-b $, for example, renames the current session. To change the prefix key from the standard Ctrl-b to, for example, Ctrl-g, you must add this to the tmux.conf:

set-option -g prefix C-g

This way, when I want to rename the remote Tmux session, I have to use Ctrl-g $, and when I want to rename the local Tmux session, I still have to use Ctrl-b $. In my case, I have this deployed to all remote servers through a configuration management system (out of scope for this blog post).

There might also be another way around this (without reconfiguring the prefix key), but that is cumbersome to use, as far as I remember.

The ts alias - Searching sessions with fuzzy finder



Even though tmux::cleanup_default keeps me from accumulating trillions of Tmux sessions, at times it can still be challenging to find exactly the session I am currently interested in. After a busy workday, I often end up with around twenty sessions on my laptop. This is where fuzzy searching for session names comes in handy, as I often don't remember the exact session names.

tmux::search () {
    local -r session=$(tmux list-sessions | fzf | cut -d: -f1)
    if [ -z "$TMUX" ]; then
        tmux attach-session -t $session
    else
        tmux switch -t $session
    fi
}
alias ts=tmux::search

All it does is list all currently open sessions in fzf, where one of them can be searched and selected through fuzzy find, and then either switch (if already inside a session) to the other session or attach to the other session (if not yet in Tmux).

You must install the fzf command on your computer for this to work. This is how it looks:

Tmux session fuzzy finder

The tssh alias - Cluster SSH replacement



Before I used Tmux, I was a heavy user of ClusterSSH, which allowed me to log in to multiple servers at once in a single terminal window and type and run commands on all of them in parallel.

https://github.com/duncs/clusterssh

However, since I started using Tmux, I have retired ClusterSSH. Tmux has the benefit of running entirely in the terminal, whereas ClusterSSH spawned separate terminal windows, which aren't easily portable (e.g., from a Linux desktop to macOS). The tmux::cluster_ssh function accepts N arguments, where:

  • ...the first argument will be the session name (see tmux::tssh_from_argument helper function), and all remaining arguments will be server hostnames/FQDNs to connect to simultaneously.
  • ...or, the first argument is a file name, and the file contains a list of hostnames/FQDNs (see the tmux::tssh_from_file helper function).

This is the function definition behind the tssh alias:

tmux::cluster_ssh () {
    if [ -f "$1" ]; then
        tmux::tssh_from_file $1
        return
    fi

    tmux::tssh_from_argument $@
}
alias tssh=tmux::cluster_ssh

This function is just a wrapper around the more complex tmux::tssh_from_file and tmux::tssh_from_argument functions, as you have learned already. Most of the magic happens there.

The tmux::tssh_from_argument helper



This is the most magic helper function we will cover in this post. It looks like this:

tmux::tssh_from_argument () {
    local -r session=$1; shift
    local first_server=$1; shift

    tmux new-session -d -s $session "ssh -t $first_server"
    if ! tmux list-session | grep "^$session:"; then
        echo "Could not create session $session"
        return 2
    fi

    for server in "${@[@]}"; do
        tmux split-window -t $session "tmux select-layout tiled; ssh -t $server"
    done

    tmux setw -t $session synchronize-panes on
    tmux -2 attach-session -t $session || tmux -2 switch-client -t $session
}

It expects at least two arguments. The first argument is the session name to create for the clustered SSH session. All other arguments are server hostnames or FQDNs to connect to. The first server is used to create the initial session; all remaining ones are added to that session with tmux split-window -t $session .... At the end, we enable synchronized panes by default, so whatever you type is sent to every SSH connection, replicating the neat ClusterSSH feature of running commands on multiple servers simultaneously. Once done, we attach to the session (or switch to it, if already inside Tmux).
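
The shift-based argument handling can be illustrated without Tmux (a hypothetical demo function; the hostnames are the made-up examples used later in this post):

```shell
# Mirror the argument handling of tmux::tssh_from_argument:
# first argument = session name, second = first server, rest = more servers.
demo() {
    local session=$1; shift
    local first_server=$1; shift
    echo "session=$session first=$first_server rest=$*"
}

demo fish blowfish.buetow.org fishfinger.buetow.org fishbone.buetow.org
# → session=fish first=blowfish.buetow.org rest=fishfinger.buetow.org fishbone.buetow.org
```

Note that the original function iterates the remaining servers with the zsh-specific "${@[@]}"; in plain Bash, "$@" does the same.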

Sometimes, I don't want the synchronized panes behavior and want to switch it off temporarily. I can do that with prefix-key p and prefix-key P after adding the following to my local tmux.conf:

bind-key p setw synchronize-panes off
bind-key P setw synchronize-panes on

The tmux::tssh_from_file helper



This one derives the session name from the file name (the basename without its extension) and then reads a list of servers from that file, passing them to tmux::tssh_from_argument as arguments. So, this is a neat little wrapper that also enables me to open clustered SSH sessions from an input file.

tmux::tssh_from_file () {
    local -r serverlist=$1; shift
    local -r session=$(basename $serverlist | cut -d. -f1)

    tmux::tssh_from_argument $session $(awk '{ print $1} ' $serverlist | sed 's/.lan./.lan/g')
}
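
The session-name derivation is easy to verify on its own (a hypothetical file name; basename doesn't require the file to exist):

```shell
# Derive the session name from a server-list file name, exactly as
# tmux::tssh_from_file does: strip the directory part, then keep
# everything before the first dot.
serverlist=manyservers.txt
session=$(basename "$serverlist" | cut -d. -f1)
echo "$session" # → manyservers
```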

tssh examples



To open a new session named fish and log in to 4 remote hosts, run this command (Note that it is also possible to specify the remote user):

$ tssh fish blowfish.buetow.org fishfinger.buetow.org \
    fishbone.buetow.org user@octopus.buetow.org

To open a new session named manyservers, put many servers (one FQDN per line) into a file called manyservers.txt and simply run:

$ tssh manyservers.txt

Common Tmux commands I use in tssh



These are default Tmux commands that I make heavy use of in a tssh session:

  • Press prefix-key DIRECTION to switch panes. DIRECTION is by default any of the arrow keys, but I also configured Vi keybindings.
  • Press prefix-key <space> to change the pane layout (can be pressed multiple times to cycle through them).
  • Press prefix-key z to zoom in and out of the current active pane.

Copy and paste workflow



As you will see later in this blog post, I have configured a history limit of 100,000 lines in Tmux so that I can scroll back quite far. One main workflow of mine is to search for text in the Tmux history, select and copy it, and then switch to another window or session and paste it there (e.g., into my text editor to do something with it).

This works by pressing prefix-key [ to enter Tmux copy mode. From there, I can browse the Tmux history of the current window using either the arrow keys or vi-like navigation (see vi configuration later in this blog post) and the Pg-Dn and Pg-Up keys.

I often search the history backwards with prefix-key [ followed by a ?, which opens the Tmux history search prompt.

Once I have identified the terminal text to be copied, I enter visual select mode with v, highlight all the text to be copied (using arrow keys or Vi motions), and press y to yank it (sorry if this all sounds a bit complicated, but Vim/NeoVim users will know this, as it is pretty much how you do it there as well).

For v and y to work, the following has to be added to the Tmux configuration file:

bind-key -T copy-mode-vi 'v' send -X begin-selection
bind-key -T copy-mode-vi 'y' send -X copy-selection-and-cancel

Once the text is yanked, I switch to another Tmux window or session where, for example, a text editor is running and paste the yanked text from Tmux into the editor with prefix-key ]. Note that when pasting into a modal text editor like Vi or Helix, you would first need to enter insert mode before prefix-key ] would paste anything.

Tmux configurations



Some features I have configured directly in Tmux don't require an external shell alias to function correctly. Let's walk line by line through my local ~/.config/tmux/tmux.conf:

source ~/.config/tmux/tmux.local.conf

set-option -g allow-rename off
set-option -g history-limit 100000
set-option -g status-bg '#444444'
set-option -g status-fg '#ffa500'
set-option -s escape-time 0

There's not much magic happening here. I source a tmux.local.conf, which I sometimes use to override the default configuration that comes from the configuration management system. It is mostly just an empty file, which exists so that Tmux doesn't throw an error on startup when I don't use it.

I work with many terminal outputs, which I also like to search within Tmux. So, I set a large enough history-limit, enabling me to search backwards in Tmux through up to 100,000 lines of output.

Besides changing some colours (personal taste), I also set escape-time to 0 as a workaround: without it, my Helix text editor's ESC key takes ages to register within Tmux. I can't remember the gory details; if everything works fine for you without this setting, you can leave it out.

The next lines in the configuration file are:

set-window-option -g mode-keys vi
bind-key -T copy-mode-vi 'v' send -X begin-selection
bind-key -T copy-mode-vi 'y' send -X copy-selection-and-cancel

I navigate within Tmux using Vi keybindings, so mode-keys is set to vi. I use the Helix modal text editor, which is close enough to Vi bindings for simple navigation to feel "native" to me. (By the way, I have been a long-time Vim and NeoVim user, but I eventually switched to Helix. That's off-topic here, but it may be worth another blog post one day.)

The two bind-key commands make it so that I can use v and y in copy mode, which feels more Vi-like (as already discussed earlier in this post).

The next set of lines in the configuration file are:

bind-key h select-pane -L
bind-key j select-pane -D
bind-key k select-pane -U
bind-key l select-pane -R

bind-key H resize-pane -L 5
bind-key J resize-pane -D 5
bind-key K resize-pane -U 5
bind-key L resize-pane -R 5

These allow me to use prefix-key h, prefix-key j, prefix-key k, and prefix-key l for switching panes, and prefix-key H, prefix-key J, prefix-key K, and prefix-key L for resizing them. If you don't know Vi/Vim/NeoVim: the letters hjkl are commonly used there for left, down, up, and right, and Helix uses the same keys, by the way.

The next set of lines in the configuration file are:

bind-key c new-window -c '#{pane_current_path}'
bind-key F new-window -n "session-switcher" "tmux list-sessions | fzf | cut -d: -f1 | xargs tmux switch-client -t"
bind-key T choose-tree

The first one makes any new window start in the current pane's working directory. The second one is more interesting: it lists all open sessions in the fuzzy finder. I rely heavily on this during my daily workflow to switch between various sessions depending on the task, e.g., from a remote cluster SSH session to a local code editor.
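As a rough sketch, the same session switcher can also be expressed as plain shell functions outside of a key binding (assumptions: fzf is installed and this runs from a shell inside a running tmux session; the `session_name` helper is hypothetical and only extracts the name before the first colon):

```shell
# Sketch of the fzf session switcher as shell functions (assumption:
# fzf is installed and this runs from a shell inside tmux).

# Hypothetical helper: a `tmux list-sessions` line looks like
# "name: 3 windows ..." -- keep only the part before the first colon.
session_name() {
  cut -d: -f1
}

# Pick a session interactively with fzf and switch the client to it.
switch_session() {
  tmux list-sessions | fzf | session_name | xargs tmux switch-client -t
}
```

The key binding in the configuration above does exactly this pipeline in one line; splitting it into functions just makes the parsing step visible.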

The third one, choose-tree, opens a tree view in Tmux listing all sessions and windows. This one is handy to get a better overview of what is currently running in any local Tmux session. It looks like this (it also allows me to press a hotkey to switch to a particular Tmux window):

Tmux session tree view


The last remaining lines in my configuration file are:

bind-key p setw synchronize-panes off
bind-key P setw synchronize-panes on
bind-key r source-file ~/.config/tmux/tmux.conf \; display-message "tmux.conf reloaded"

We discussed synchronized panes earlier. I use it all the time in clustered SSH sessions. When enabled, all panes (remote SSH sessions) receive the same keystrokes. This is very useful when you want to run the same commands on many servers at once, such as navigating to a common directory, restarting a couple of services at once, or running tools like htop to quickly monitor system resources.

The last one reloads my Tmux configuration on the fly.

E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site
Projects I currently don't have time for gemini://foo.zone/gemfeed/2024-05-03-projects-i-currently-dont-have-time-for.gmi 2024-05-03T16:23:03+03:00 Paul Buetow aka snonux paul@dev.buetow.org Over the years, I have collected many ideas for my personal projects and noted them down. I am currently in the process of cleaning up all my notes and reviewing those ideas. I don’t have time for the ones listed here and won’t have any soon due to other commitments and personal projects. So, in order to 'get rid of them' from my notes folder, I decided to simply put them in this blog post so that those ideas don't get lost. Maybe I will pick up one or another idea someday in the future, but for now, they are all put on ice in favor of other personal projects or family time.

Projects I currently don't have time for



Published at 2024-05-03T16:23:03+03:00

Over the years, I have collected many ideas for my personal projects and noted them down. I am currently in the process of cleaning up all my notes and reviewing those ideas. I don’t have time for the ones listed here and won’t have any soon due to other commitments and personal projects. So, in order to "get rid of them" from my notes folder, I decided to simply put them in this blog post so that those ideas don't get lost. Maybe I will pick up one or another idea someday in the future, but for now, they are all put on ice in favor of other personal projects or family time.

Art by Laura Brown

.'`~~~~~~~~~~~`'.
(  .'11 12 1'.  )
|  :10 \    2:  |
|  :9   @-> 3:  |
|  :8       4;  |
'. '..7 6 5..' .'
 ~-------------~  ldb


Table of Contents




Hardware projects I don't have time for



I use Arch, btw!



The idea was to build the ultimate Arch Linux setup on an old ThinkPad X200 booting with the open-source LibreBoot firmware, complete with a tiling window manager, dmenu, and all the elite tools. This is mainly for fun, as I am pretty happy (and productive) with my Fedora Linux setup. I ran EndeavourOS (close enough to Arch) on an old ThinkPad for a while, but then I switched back to Fedora because the rolling releases were annoying (there were too many updates).

OpenBSD home router



In my student days, I operated a 486DX PC with OpenBSD as my home DSL internet router. I bought the setup from my brother back then. The router's hostname was fishbone, and it performed very well until it became too slow for larger broadband bandwidth after a few years of use.

I had the idea to revive this concept, implement fishbone2, and place it in front of my proprietary ISP router to add an extra layer of security and control in my home LAN. It would serve as the default gateway for all of my devices, act as a Wi-Fi access point, and run a DNS server, a Pi-hole proxy, a VPN client, and a DynDNS client. I would also implement high availability using OpenBSD's CARP protocol.

https://openbsdrouterguide.net
https://pi-hole.net/
https://www.OpenBSD.org
https://www.OpenBSD.org/faq/pf/carp.html

However, I am putting this on hold as I have opted for an OpenWRT-based solution, which was much quicker to set up and runs well enough.

https://OpenWRT.org/

Pi-Hole server



Install Pi-hole on one of my Pis or run it in a container on Freekat. For now, I am putting this on hold as the primary use for this would be ad-blocking, and I am avoiding surfing ad-heavy sites anyway. So there's no significant use for me personally at the moment.

https://pi-hole.net/

Infodash



The idea was to implement my smart info screen using purely open-source software. It would display information such as the health status of my personal infrastructure, my current work tracker balance (I track how much I work to prevent overworking), and my sports balance (I track my workouts to stay within my quotas for general health). The information would be displayed on a small screen in my home office, on my Pine watch, or remotely from any terminal window.

I don't have this, and I haven't missed it, so I guess it would have been nice to have but wouldn't provide any value beyond the "fun of tinkering".

Reading station



I wanted to create the most comfortable setup possible for reading digital notes, articles, and books. This would include a comfy armchair, a silent barebone PC or Raspberry Pi computer running either Linux or *BSD, and an e-Ink display mounted on a flexible arm/stand. There would also be a small table for my paper journal for occasional note-taking. There is plenty of open-source software available for PDF and ePub reading. It would have been neat, but I am currently using the most straightforward solution: a Kobo Elipsa 2E, which I can use on my sofa.

Retro station



I had an idea to build a computer infused with retro elements. It wouldn't use actual retro hardware but would look and feel like a retro machine. I would call this machine HAL or Retron.

I would use an old ThinkPad laptop placed on a horizontal stand, run NetBSD on it, and attach a keyboard from ModelFkeyboards. I would use Window Maker as the window manager and run terminal applications through Cool Retro Term. For the monitor, I would use an older (black) EIZO model with large bezels.

https://www.NetBSD.org
https://www.modelfkeyboards.com
https://github.com/Swordfish90/cool-retro-term

The computer would occasionally be used to surf the Gemini space, take notes, blog, or do light coding. However, I have abandoned the project for now because there isn't enough space in my apartment, as my daughter will get a room of her own.

Sound server



My idea involved using a barebone mini PC running FreeBSD with the Navidrome sound server software. I could connect to it remotely from my phone, workstation, or laptop to listen to my music collection. The storage would be based on ZFS with at least two drives for redundancy. The app would run in a Linux Docker container under FreeBSD via Bhyve.

https://github.com/navidrome/navidrome
https://wiki.freebsd.org/bhyve

Project Freekat



My idea involved purchasing the Meerkat mini PC from System76 and installing FreeBSD. Like the sound-server idea (see previous idea), it would run Linux Docker through Bhyve. I would self-host a bunch of applications on it:

  • Wallabag
  • Ankidroid
  • Miniflux & Postgres
  • Audiobookshelf
  • ...

All of this would be within my LAN, but the services would also be accessible from the internet through either Wireguard or SSH reverse tunnels to one of my OpenBSD VMs, for example:

  • wallabag.awesome.buetow.org
  • ankidroid.awesome.buetow.org
  • miniflux.awesome.buetow.org
  • audiobookshelf.awesome.buetow.org
  • ...

I am abandoning this project for now, as I am currently hosting my apps on AWS ECS Fargate under *.cool.buetow.org, which is "good enough" for the time being and also offers the benefit of learning to use AWS and Terraform, knowledge that can be applied at work.

My personal AWS setup

Programming projects I don't have time for



CLI-HIVE



This was a pet project idea that my brother and I had. The concept was to collect all shell history of all servers at work in a central place, apply ML/AI, and return suggestions for commands to type or allow a fuzzy search on all the commands in the history. The recommendations for the commands on a server could be context-based (e.g., past occurrences on the same server type).

You could decide whether to share your command history with others so they would receive better suggestions depending on which server they are on, or you could keep all the history private and secure. The plan was to add hooks into zsh and bash shells so that all commands typed would be pushed to the central location for data mining.
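A minimal sketch of such a shell hook for Bash (all names here are hypothetical; a real CLI-HIVE hook would push the record to the central collector instead of appending to a local file):

```shell
# Hypothetical sketch of a CLI-HIVE shell hook for Bash. Instead of
# pushing to a central service, this appends host, timestamp, and the
# command line to a local file; a real hook would ship that line off-box.
HIVE_LOG="${HIVE_LOG:-$HOME/.cli-hive.log}"

hive_record() {
  # $1 = the command line that was just entered
  printf '%s\t%s\t%s\n' "$(hostname)" "$(date +%s)" "$1" >> "$HIVE_LOG"
}

# Bash variant: run the hook before every prompt, logging the most
# recent history entry (the command the user just finished typing).
hive_prompt_hook() {
  hive_record "$(history 1 | sed 's/^ *[0-9]* *//')"
}
# PROMPT_COMMAND="hive_prompt_hook"   # enable in ~/.bashrc
```

Zsh would use a preexec hook instead of PROMPT_COMMAND, but the recording function could stay the same.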

Enhanced KISS home photo albums



I don't use third-party cloud providers such as Google Photos to store/archive my photos. Instead, they are all on a ZFS volume on my home NAS, with regular offsite backups taken. Thus, my project would involve implementing the features I miss most or finding a solution simple enough to host on my LAN:

  • One feature I miss would present me with a random day from the past and some photos from that day. This project would randomly select a day and generate a photo album for me to view and reminisce over.
  • Another feature I miss is the ability to automatically deduplicate all the photos, as I am sure there are tons of duplicates on my NAS.
  • Auto-enhancing the photos (perhaps using ImageMagick?)
  • I already have a simple photoalbum.sh script that generates an album based on an input directory. However, it would also be great to have a timeline feature to enable browsing through different dates.
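For the deduplication item, here is a rough sketch using content hashes (assumptions: the duplicates are byte-identical copies, and GNU coreutils is available; resized or re-encoded copies would need fuzzier matching):

```shell
# Sketch: find byte-identical duplicate photos by content hash.
# Assumes GNU sha256sum and GNU uniq (Linux); prints groups of paths
# that share the same checksum, separated by blank lines.
find_dupes() {
  find "$1" -type f -print0 \
    | xargs -0 sha256sum \
    | sort \
    | uniq -w64 --all-repeated=separate   # -w64 compares only the hash
}
```

Running `find_dupes /nas/photos` would list every group of identical files, leaving it to you (or a follow-up script) to decide which copy to keep.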

KISS static web photo albums with photoalbum.sh

KISS file sync server with end-to-end encryption



I aimed to have a simple server to which I could sync notes and other documents, ensuring that the data is fully end-to-end encrypted. This way, only the clients could decrypt the data, while an encrypted copy of all the data would be stored on the server side. There are a few solutions (e.g., NextCloud), but they are bloated or complex to set up.

I currently use Syncthing to sync files across all my devices (transfers are encrypted); however, the data at rest is not end-to-end encrypted. It's a good-enough setup, though, as my Syncthing server is in my home LAN on an encrypted file system.

https://syncthing.net

I also had the idea of using this as a pet project for work and naming it Cryptolake, utilizing post-quantum-safe encryption algorithms and a distributed data store.

A language that compiles to bash



I had an idea to implement a higher-level language with strong typing that could be compiled into native Bash code. This would make all resulting Bash scripts more robust and secure by default. The project would involve developing a parser, lexer, and a Bash code generator. I planned to implement this in Go.
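For illustration, here is a hypothetical fragment of what the generated Bash output might look like; the strict mode and the defensive quoting are exactly the kind of defaults such a compiler would emit (none of this comes from an actual implementation):

```shell
#!/bin/sh
# Hypothetical example of code the compiler might emit: strict mode and
# defensive quoting by default, so generated scripts fail fast instead
# of silently misbehaving on unset variables or failed commands.
set -eu

retries=3                     # compiled from a typed integer declaration

greet() {
  name="$1"                   # every expansion quoted by the code generator
  printf 'Hello, %s!\n' "$name"
}

greet "world"
```

The point is that the higher-level language would guarantee these safety properties in the output, rather than relying on the script author to remember them.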

I had previously implemented a tiny scripting language called Fype (For Your Program Execution), which could have served as inspiration.

The Fype Programming Language

A language that compiles to sed



This is similar to the previous idea, but the difference is that the language would compile into a sed script. Sed has many features, but the brief syntax makes scripts challenging to read. The higher-level language would mimic sed but in a form that is easier for humans to read.
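To illustrate the readability problem, here is a small example of the kind of terse sed program (GNU sed syntax) the hypothetical higher-level language would compile down to; the `filter_errors` wrapper name is made up for this sketch:

```shell
# A short but already cryptic sed program: print only lines containing
# "error", prefixed with "[!] ". Each command inside the braces does one
# step; -n suppresses sed's default printing.
filter_errors() {
  sed -n '/error/{s/^/[!] /; p}'
}
```

A readable compiled-to-sed language could express the same thing as something like "on lines matching 'error': prepend '[!] '; print", which is trivial here but pays off quickly as sed scripts grow.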

Renovate VS-Sim



VS-Sim is an open-source simulator programmed in Java for distributed systems. VS-Sim stands for "Verteilte Systeme Simulator," the German translation for "Distributed Systems Simulator." The VS-Sim project was my diploma thesis at Aachen University of Applied Sciences.

https://codeberg.org/snonux/vs-sim

The ideas I had were:

  • Translate the project into English.
  • Modernise the Java codebase to be compatible with the latest JDK.
  • Make it compile to native binaries using GraalVM.
  • Distribute the project using AppImages.

I have put this project on hold for now, as I want to do more things in Go and fewer in Java in my personal time.

KISS ticketing system



My idea was to program a KISS (Keep It Simple, Stupid) ticketing system for my personal use. However, I am abandoning this project because I now use the excellent Taskwarrior software. You can learn more about it at:

https://taskwarrior.org/

A domain-specific language (DSL) for work



At work, an internal service allocates storage space for our customers on our storage clusters. It automates many tasks, but many tweaks are only accessible through APIs. I had the idea to implement a Ruby-based DSL that would make using all those APIs for ad-hoc changes effortless, e.g.:

Cluster :UK, :uk01 do
  Customer.C1A1.segments.volumes.each do |volume|
    puts volume.usage_stats
    volume.move_off! if volume.over_subscribed?
  end
end

I am abandoning this project because my workplace has stopped the annual pet project competition, and I have other more important projects to work on at the moment.

Creative universe (Work pet project contests)

Self-hosting projects I don't have time for



My own Matrix server



I value privacy, and it would be great to run my own Matrix server for communication within my family. However, I haven't yet had time to look into this more closely.

https://matrix.org

Ampache music server



Ampache is an open-source music streaming server that allows you to host and manage your music collection online, accessible via a web interface. Setting it up involves configuring a web server, installing Ampache, and organising your music files, which can be time-consuming.

Librum eBook reader



Librum is a self-hostable e-book reader that allows users to manage and read their e-book collection from a web interface. Designed to be a self-contained platform where users can upload, organise, and access their e-books, Librum emphasises privacy and control over one's digital library.

https://github.com/Librum-Reader/Librum

I am using my Kobo devices or my laptop to read these kinds of things for now.

Memos - Note-taking service



Memos is a note-taking service that simplifies and streamlines information capture and organisation. It focuses on providing users with a minimalistic and intuitive interface, aiming to enhance productivity without the clutter commonly associated with more complex note-taking apps.

https://www.usememos.com

I am abandoning this idea for now, as I am currently using plain Markdown files for notes and syncing them with Syncthing across my devices.

Bepasty server



Bepasty is like a Pastebin for all kinds of files (text, image, audio, video, documents, binary, etc.). It seems very neat, but I rarely share files nowadays. When I do, I upload them via SCP to one of my OpenBSD VMs and serve them via vanilla httpd there, keeping it KISS.

https://github.com/bepasty/bepasty-server

Books I don't have time to read



Fluent Python



I consider myself an advanced programmer in Ruby, Bash, and Perl. However, Python seems to be ubiquitous nowadays, and most of my colleagues prefer Python over any other language. Thus, it makes sense for me to also learn and use Python. After conducting some research, "Fluent Python" appears to be the best book for this purpose.

I don't have time to read this book at the moment, as I am focusing more on Go (Golang) and I know just enough Python to get by (e.g., for code reviews). Additionally, there are still enough colleagues around who can review my Ruby or Bash code.

Programming Ruby



I've read a couple of Ruby books already, but "Programming Ruby," which covers up to Ruby 3.2, was just recently released. I would like to read this to deepen my Ruby knowledge further and to revisit some concepts that I may have forgotten.

As stated in this blog post, I am currently more eager to focus on Go, so I've put the Ruby book on hold. Additionally, there wouldn't be enough colleagues who could "understand" my advanced Ruby skills anyway, as most of them are either Java developers or SREs who don't code a lot.

Peter F. Hamilton science fiction books



I am a big fan of science fiction, but my reading list is currently too long anyway. So, I've put the Hamilton books on the back burner for now. You can see all the novels I've read here:

https://paul.buetow.org/novels.html
gemini://paul.buetow.org/novels.gmi


New websites I don't have time for



Create a "Why Raku Rox" site



The website "Why Raku Rox" would showcase the unique features and benefits of the Raku programming language and highlight why it is an exceptional choice for developers. Raku, originally known as Perl 6, is a dynamic, expressive language designed for flexible and powerful software development.

This would be similar to the "Why OpenBSD rocks" site:

https://why-openbsd.rocks
https://raku.org

I am not working on this for now, as I currently don’t even have time to program in Raku.

Research projects I don't have time for



Project secure



For work: Implement a PoC that dumps Java heaps to extract secrets from memory. Based on the findings, write a Java program that stores secrets in memory areas hidden from the rest of the system via the memfd_secret() syscall, making them much harder to extract.

https://lwn.net/Articles/865256/

Due to other priorities, I am putting this on hold for now. The software we have built is pretty damn secure already!

CPU utilisation is all wrong



This research project, based on Brendan Gregg's blog post, could have a significant impact on my work.

https://brendangregg.com/blog/2017-05-09/cpu-utilization-is-wrong.html

The research project would involve setting up dashboards that display actual CPU usage: cycles spent doing work versus cycles stalled waiting for memory access.
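The core metric from Gregg's post is IPC (instructions per cycle). On Linux you would typically sample it with something like `perf stat -e cycles,instructions -a sleep 10`; as a sketch, the derived metric itself is just a ratio (the `ipc` helper below is hypothetical, not part of any existing tool):

```shell
# Hypothetical helper: compute IPC (instructions per cycle) from two
# counter values, e.g. as reported by `perf stat -e cycles,instructions`.
# An IPC well below 1 often means the CPU is stalled waiting on memory
# rather than doing useful work, even though %CPU looks "busy".
ipc() {
  # $1 = instructions, $2 = cycles
  awk -v i="$1" -v c="$2" 'BEGIN { printf "%.2f\n", i / c }'
}

ipc 8000000 10000000   # prints 0.80
```

A dashboard built on these counters would show this ratio alongside plain CPU utilisation, which is the whole point of the research project.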

E-Mail your comments to paul@nospam.buetow.org :-)

Related and maybe interesting:

Sweating the small stuff - Tiny projects of mine

Back to the main site
'Slow Productivity' book notes gemini://foo.zone/gemfeed/2024-05-01-slow-productivity-book-notes.gmi 2024-04-27T14:18:51+03:00 Paul Buetow aka snonux paul@dev.buetow.org These are my personal takeaways after reading 'Slow Productivity - The lost Art of Accomplishment Without Burnout' by Cal Newport.

"Slow Productivity" book notes



Published at 2024-04-27T14:18:51+03:00

These are my personal takeaways after reading "Slow Productivity - The lost Art of Accomplishment Without Burnout" by Cal Newport.

The case studies in this book were a bit long, but they appeared to be well-researched. I will only highlight the interesting, actionable items in the book notes.

These notes are mainly for my own use, but you may find them helpful.

         ,..........   ..........,
     ,..,'          '.'          ',..,
    ,' ,'            :            ', ',
   ,' ,'             :             ', ',
  ,' ,'              :              ', ',
 ,' ,'............., : ,.............', ',
,'  '............   '.'   ............'  ',
 '''''''''''''''''';''';''''''''''''''''''
                    '''

Table of Contents




It's not "slow productivity"



"Slow productivity" does not mean being less productive. Cal Newport wants to point out that you can be much more productive with "slow productivity" than you would be without it. It is a different way of working than most of us are used to in the modern workplace, which is hyper-connected and always online.

Pseudo-productivity and Shallow work



People use visible activity as a stand-in for real productivity because it's easier to measure. This is called pseudo-productivity.
Pseudo-productivity is used as a proxy for real productivity: if you don't look busy, you are dismissed as lazy or lacking a work ethic.

There is a tendency to perform shallow work because people will otherwise dismiss you as lazy. A lot of shallow work can cause burnout, as multiple things are often being worked on in parallel. The more you have on your plate, the more stressed you will be.

Shallow work usually doesn't help you to accomplish big things. Always have the big picture in mind. Shallow work can't be entirely eliminated, but it can be managed; for example, plan dedicated time slots for certain types of shallow work.

Accomplishments without burnout



The overall perception is that if you want to accomplish something, you must put yourself on the verge of burnout. Cal Newport writes about "the lost art of accomplishment without burnout": you can accomplish big things without all the stress usually involved.

There are three principles for the maintenance of a sustainable work life:

  • Do fewer things
  • Work at a natural pace
  • Obsess over quality

Do fewer things



There will always be more work. The faster you finish it, the quicker you will have something new on your plate.

Reduce the overhead tax. The overhead tax is all the administrative work to be done. With every additional project, there will also be more administrative stuff to be done on your work plate. So, doing fewer things leads to more and better output and better quality for the projects you are working on.

Limit the things on your plate. Limit your missions (personal goals, professional goals). Reduce your main objectives in life. More than five missions are usually not sustainable, so you really have to prioritise what is important to you, both personally and professionally.

A mission is an overall objective/goal that can have multiple projects. Limit the projects as well. Some projects have no natural ending (e.g., supporting a never-ending flow of incoming requests); in this case, set limits (e.g., time-box your support hours). You can also plan "office hours" for collaborative work with colleagues to avoid ad-hoc distractions.

The key point is that after making these commitments, you really deliver on them. This builds trust, and people will leave you alone and not ask for progress all the time.

Doing fewer things is essential for modern knowledge workers. Breathing space in your work also makes you more creative and happier overall.

Pushing more work onto workers can make them less productive, so the better approach is the pull model, where workers pull in new work when the previous task is finished.

If you can quantify how busy you are or how many other projects you already work on, then it is easier to say no to new things. For example, show what you are doing, what's in the roadmap, etc. Transparency is the key here.

You can have your own simulated pull system if the company doesn't agree to a global one:

  • State which additional information you would need.
  • Create a rough estimate of when you will be able to work on it.
  • Estimate how long the project would take. Double that estimate, as humans are very bad estimators.
  • Respond to the requester and state that you will let them know if the estimates change.

Sometimes, a little friction is all that is needed to combat incoming work, e.g., when your manager starts seeing the reality of your work plate, and you also request additional information for the task. If you already have too much on your plate, then decline the new project or make room for it in your calendar. If you present a large task list, others will struggle to assign more to you.

Limit your daily goals. A good measure is to focus on one goal per day. You can time block time for deep work on your daily goal. During that time, you won't be easily available to others.

You must fight the battle against distractions yourself to be the master of your time; nobody will fight this war for you. (Also, have a look at Cal Newport's "time block planning" method.)

Put tasks on autopilot (regular recurring tasks).

Work at a natural pace



We suffer from overambitious timelines, task lists, and busyness. Focus on what matters. Don't rush your most important work; you will achieve better results.

Don't rush. If you rush or are under pressure, you will be less effective and eventually burn out. Our brains work better when not rushed. By the time stress signals too much work, it is generally too late to reduce the workload. That's why most of us typically have dangerously much on our plates.

Have the courage to take longer to do things that are important. For example, plan on a yearly and larger scale, like 2 to 5 years.

Find a reasonable time for a project and then double the project timeline against overconfident optimism. Humans are not great at estimating. They gravitate towards best-case estimates. If you have planned more than enough time for your project, then you will fall into a natural work pace. Otherwise, you will struggle with rushing and stress.

Some days will still be intense and stressful, but those are exceptional cases. After those exceptions (e.g., finalizing that thing, etc.), calmer periods will follow again.

Pace yourself: aim for modest results sustained over time. Simplify and reduce the daily task lists. Regarding meetings: protect certain hours for actual work. For each meeting, add a protected block to your calendar so that meetings take up at most half a day.

Schedule slow seasons (e.g., when on vacation). Disconnect in the slow season. Doing nothing will not satisfy your mind, though. You could read a book on your subject matter to counteract that.

Obsess over quality



Obsess over quality even if you lose short-term opportunities by rejecting other projects. Quality demands that you slow down. The two previous principles (do fewer things and work at a natural pace) are mandatory for this principle to work:

  • Focus on the core activities of your work for your obsession - you will only have the time to obsess over some things.
  • Deliver solid work with good quality.
  • Sharpen the focus to do the best work possible.

Go pro to save time; don't squeeze everything you can out of freemium services. Professional software and services eliminate administrative work:

  • Pay people who know what they are doing and focus on your stuff.
  • For example, don't repair that car if you know the mechanic can do that much better than you.
  • Or don't use the free version of the music streaming service if it interrupts you with commercials, hindering your ability to concentrate on your work.
  • Hire an accountant for your yearly tax returns. They know much more about that stuff than you do, and in the end it may even be cheaper, as they know all the tax laws.
  • ...

Adjust your workplace to what you want to accomplish. You could have dedicated places in your home for different things, e.g., a place where you read and think (armchair) and a place where you collaborate (your desk or whiteboard). Surround yourself with things that inspire you (e.g., your favourite books on your shelf next to you, etc.).

There is the concept of quiet quitting. It doesn't mean quitting your job, but it means that you don't go beyond and above the expectations people have of you. Quiet quitting became popular with modern work, which is often meaningless and full of shallow tasks. If you obsess over quality, you enjoy your craft and want to go beyond and above.

Implement rituals and routines which shift you towards your goals:

  • For example, if you want to be a good Software Engineer, you also have to put in the work regularly. For instance, progress a bit every day in your project at hand, even if it is only one hour daily. Also, a little quality daily work will be more satisfying over time than many shallow tasks.
  • Do you want to be lean and/or healthy? Schedule your daily walks and workouts. They will become habits over time.
  • There's the compounding effect: every small effort made every day will yield significant results in the long run.

Deciding what not to do is as important as deciding what to do.

It may appear to be money thrown out of the window, but get yourself an expensive $50 paper notebook (and a good pen). Unconsciously, it will make you take notes more seriously. You will think more profoundly about what to put into the notebook and will have thought your ideas through more intensively. With very cheap notebooks, you would scribble a lot of rubbish and wouldn't even recognise your own handwriting after a while. So choosing a high-quality notebook helps you take higher-quality notes, too.

Slow productivity is actionable and can be applied immediately.

E-Mail your comments to paul@nospam.buetow.org :-)

Other book notes of mine are:

2025-06-07 "A Monk's Guide to Happiness" book notes
2025-04-19 "When: The Scientific Secrets of Perfect Timing" book notes
2024-10-24 "Staff Engineer" book notes
2024-07-07 "The Stoic Challenge" book notes
2024-05-01 "Slow Productivity" book notes (You are currently reading this)
2023-11-11 "Mind Management" book notes
2023-07-17 "Software Developer's Career Guide and Soft Skills" book notes
2023-05-06 "The Obstacle is the Way" book notes
2023-04-01 "Never split the difference" book notes
2023-03-16 "The Pragmatic Programmer" book notes

Back to the main site
KISS high-availability with OpenBSD gemini://foo.zone/gemfeed/2024-04-01-KISS-high-availability-with-OpenBSD.gmi 2024-03-30T22:12:56+02:00 Paul Buetow aka snonux paul@dev.buetow.org I have always wanted a highly available setup for my personal websites. I could have used off-the-shelf hosting solutions or hosted my sites in an AWS S3 bucket. I have used technologies like (in unsorted and slightly unrelated order) BGP, LVS/IPVS, ldirectord, Pacemaker, STONITH, scripted VIP failover via ARP, heartbeat, heartbeat2, Corosync, keepalived, DRBD, and commercial F5 Load Balancers for high availability at work.

KISS high-availability with OpenBSD



Published at 2024-03-30T22:12:56+02:00

I have always wanted a highly available setup for my personal websites. I could have used off-the-shelf hosting solutions or hosted my sites in an AWS S3 bucket. I have used technologies like (in unsorted and slightly unrelated order) BGP, LVS/IPVS, ldirectord, Pacemaker, STONITH, scripted VIP failover via ARP, heartbeat, heartbeat2, Corosync, keepalived, DRBD, and commercial F5 Load Balancers for high availability at work.

But still, my personal sites were never highly available. All those technologies are great for professional use, but I was looking for something much more straightforward for my personal space - something as KISS (keep it simple, stupid) as possible.

It would be fine if my personal website wasn't highly available, but the geek in me wants it anyway.

PS: ASCII-art below reflects an OpenBSD under-water world with all the tools available in the base system.

Art by Michael J. Penick (mod. by Paul B.)
                                               ACME-sky
        __________
       / nsd tower\                                             (
      /____________\                                           (\) awk-ward
       |:_:_:_:_:_|                                             ))   plant
       |_:_,--.:_:|                       dig-bubble         (\//   )
       |:_:|__|_:_|  relayd-castle          _               ) ))   ((
    _  |_   _  :_:|   _   _   _            (_)             ((((   /)\`
   | |_| |_| |   _|  | |_| |_| |             o              \\)) (( (
    \_:_:_:_:/|_|_|_|\:_:_:_:_/             .                ((   ))))
     |_,-._:_:_:_:_:_:_:_.-,_|                                )) ((//
     |:|_|:_:_:,---,:_:_:|_|:|                               ,-.  )/
     |_:_:_:_,'puffy `,_:_:_:_|           _  o               ,;'))((
     |:_:_:_/  _ | _  \_:_:_:|          (_O                   ((  ))
_____|_:_:_|  (o)-(o)  |_:_:_|--'`-.     ,--. ksh under-water (((\'/
 ', ;|:_:_:| -( .-. )- |:_:_:| ', ; `--._\  /,---.~  goat     \`))
.  ` |_:_:_|   \`-'/   |_:_:_|.  ` .  `  /()\.__( ) .,-----'`-\(( sed-root
 ', ;|:_:_:|    `-'    |:_:_:| ', ; ', ; `--'|   \ ', ; ', ; ',')).,--
.  ` MJP ` .  ` .  ` .  ` . httpd-soil ` .    .  ` .  ` .  ` .  ` .  `
 ', ; ', ; ', ; ', ; ', ; ', ; ', ; ', ; ', ; ', ; ', ; ', ; ', ; ', ;


Table of Contents




My auto-failover requirements



  • Be OpenBSD-based (I prefer OpenBSD because of its cleanliness and good documentation) and rely on as few external packages as possible.
  • Don't rely on the hottest and newest tech (I don't want to have to migrate everything to a new and fancier technology next month already!).
  • It should be reasonably cheap. I want to avoid paying a premium for floating IPs or fancy Elastic Load Balancers.
  • It should be geo-redundant.
  • It's fine if my sites aren't reachable for five or ten minutes every other month. Due to their static nature, I don't care if there's a split-brain scenario where some requests reach one server and other requests reach another server.
  • Failover should work for both HTTP/HTTPS and Gemini protocols. My self-hosted MTAs and DNS servers should also be highly available.
  • Let's Encrypt TLS certificates should always work (before and after a failover).
  • Have good monitoring in place so I know when a failover was performed and when something went wrong with the failover.
  • Don't configure everything manually. The configuration should be automated and reproducible.

My HA solution



Only OpenBSD base installation required



My HA solution for Web and Gemini is based on DNS (OpenBSD's nsd) and a simple shell script (OpenBSD's ksh plus a little sed, awk, and grep). All software used here is part of the OpenBSD base system, and no external packages need to be installed - OpenBSD is a complete operating system.

https://man.OpenBSD.org/nsd.8
https://man.OpenBSD.org/ksh
https://man.OpenBSD.org/awk
https://man.OpenBSD.org/sed
https://man.OpenBSD.org/dig
https://man.OpenBSD.org/ftp
https://man.OpenBSD.org/cron

I also used the dig (for DNS checks) and ftp (for HTTP/HTTPS checks) programs.

The DNS failover is performed automatically between the two OpenBSD VMs involved (my setup doesn't require any quorum for a failover, so there is no need for a 3rd VM). The ksh script, executed once per minute via CRON (on both VMs), performs a health check to determine whether the current master node is available. If the current master isn't available (no expected HTTP response), a failover to the standby VM is performed:

#!/bin/ksh

ZONES_DIR=/var/nsd/zones/master/
DEFAULT_MASTER=fishfinger.buetow.org
DEFAULT_STANDBY=blowfish.buetow.org

determine_master_and_standby () {
    local master=$DEFAULT_MASTER
    local standby=$DEFAULT_STANDBY

    .
    .
    .
    
    local -i health_ok=1
    if ! ftp -4 -o - https://$master/index.txt | grep -q "Welcome to $master"; then
        echo "https://$master/index.txt IPv4 health check failed"
        health_ok=0
    elif ! ftp -6 -o - https://$master/index.txt | grep -q "Welcome to $master"; then
        echo "https://$master/index.txt IPv6 health check failed"
        health_ok=0
    fi
    if [ $health_ok -eq 0 ]; then
        local tmp=$master
        master=$standby
        standby=$tmp
    fi

    .
    .
    .
}

The failover script looks for the "; Enable failover" string in the DNS zone files and swaps the A and AAAA records of the DNS entries accordingly:

fishfinger$ grep failover /var/nsd/zones/master/foo.zone.zone
        300 IN A 46.23.94.99 ; Enable failover
        300 IN AAAA 2a03:6000:6f67:624::99 ; Enable failover
www     300 IN A 46.23.94.99 ; Enable failover
www     300 IN AAAA 2a03:6000:6f67:624::99 ; Enable failover
standby  300 IN A 23.88.35.144 ; Enable failover
standby  300 IN AAAA 2a01:4f8:c17:20f1::42 ; Enable failover

transform () {
  sed -E '
	/IN A .*; Enable failover/ {
	    /^standby/! {
	        s/^(.*) 300 IN A (.*) ; (.*)/\1 300 IN A '$(cat /var/nsd/run/master_a)' ; \3/;
	    }
	    /^standby/ {
	        s/^(.*) 300 IN A (.*) ; (.*)/\1 300 IN A '$(cat /var/nsd/run/standby_a)' ; \3/;
	    }
	}
	/IN AAAA .*; Enable failover/ {
	    /^standby/! {
	        s/^(.*) 300 IN AAAA (.*) ; (.*)/\1 300 IN AAAA '$(cat /var/nsd/run/master_aaaa)' ; \3/;
	    }
	    /^standby/ {
	        s/^(.*) 300 IN AAAA (.*) ; (.*)/\1 300 IN AAAA '$(cat /var/nsd/run/standby_aaaa)' ; \3/;
	    }
	}
	/ ; serial/ {
	    s/^( +) ([0-9]+) .*; (.*)/\1 '$(date +%s)' ; \3/;
	}
  '
}

After the failover, the script reloads nsd and performs a sanity check to see if DNS still works. If not, a rollback will be performed:

# Note: there is a potential race condition here
   
if [ -f $zone_file.bak ]; then
    mv $zone_file.bak $zone_file
fi

transform < $zone_file > $zone_file.new.tmp

grep -v ' ; serial' $zone_file.new.tmp > $zone_file.new.noserial.tmp
grep -v ' ; serial' $zone_file > $zone_file.old.noserial.tmp

echo "Has zone $zone_file changed?"
if diff -u $zone_file.old.noserial.tmp $zone_file.new.noserial.tmp; then
    echo "The zone $zone_file hasn't changed"
    rm $zone_file.*.tmp
    return 0
fi

cp $zone_file $zone_file.bak
mv $zone_file.new.tmp $zone_file
rm $zone_file.*.tmp
echo "Reloading nsd"
nsd-control reload

if ! zone_is_ok $zone; then
    echo "Rolling back $zone_file changes"
    cp $zone_file $zone_file.invalid
    mv $zone_file.bak $zone_file
    echo "Reloading nsd"
    nsd-control reload
    zone_is_ok $zone
    return 3
fi

for cleanup in invalid bak; do
    if [ -f $zone_file.$cleanup ]; then
        rm $zone_file.$cleanup
    fi
done

echo "Failover of zone $zone to $MASTER completed"
return 1

A non-zero return code (here, 3 when a rollback was performed and 1 when a DNS failover was performed) causes CRON to send an E-Mail with the whole script output.
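
For reference, the CRON side might look like this hypothetical crontab sketch (not the author's actual entry); cron mails a job's output to the MAILTO address whenever there is any:

```
# Hypothetical crontab sketch: run the failover check once per minute.
# cron e-mails the script's output to MAILTO whenever it produces any.
MAILTO=paul@example.org
* * * * * /usr/local/bin/dns-failover.ksh
```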

The authoritative nameserver for my domains runs on both VMs, and both are configured to be a "master" DNS server so that each has its own individual zone files, which can be changed independently. Otherwise, my setup wouldn't work. The side effect is that in a split-brain scenario (the two VMs cannot see each other), both would promote themselves to master via their local DNS entries. More about that later, but that's fine for my use case.

Check out the whole script here:

dns-failover.ksh

Fairly cheap and geo-redundant



I am renting two small OpenBSD VMs: One at OpenBSD Amsterdam and the other at Hetzner Cloud. So, both VMs are hosted at another provider, in different IP subnets, and in different countries (the Netherlands and Germany).

https://OpenBSD.Amsterdam
https://www.Hetzner.cloud

My sites receive only a little traffic. I could always upload the static content to AWS S3 if I suddenly had to, but I doubt that will ever be required.

A DNS-based failover is cheap, as there isn't any BGP or fancy load balancer to pay for. Small VMs also cost less than millions.

Failover time and split-brain



A DNS failover doesn't happen immediately. I've configured a DNS TTL of 300 seconds, and the failover script checks once per minute whether to perform a failover or not. So, in total, a failover can take six minutes (not including other DNS caching servers somewhere in the interweb, but that's fine - eventually, all requests will resolve to the new master after a failover).
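
The worst case adds up as follows (a trivial sketch of the numbers above):

```shell
#!/bin/sh
# Worst-case delay until clients resolve the new master:
# the cached DNS TTL plus the cron check interval.
ttl=300            # seconds, as configured in the zone files
check_interval=60  # the failover script runs once per minute via CRON
worst_case=$((ttl + check_interval))
echo "${worst_case}s"   # 360 seconds, i.e. 6 minutes
```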

A split-brain scenario between the old master and the new master might happen. That's OK, as my sites are static, and there's no database to synchronise other than HTML, CSS, and images when the site is updated.

Failover support for multiple protocols



The DNS failover covers the HTTP, HTTPS, and Gemini protocols. This works because all domain virtual hosts are configured on both VMs' httpd (OpenBSD's HTTP server) and relayd (also part of OpenBSD; I use it for TLS offloading of the Gemini protocol). So, both VMs accept requests for all hosts; the DNS entries alone determine which VM receives the requests.

https://man.OpenBSD.org/httpd.8
https://man.OpenBSD.org/relayd.8

For example, the master is responsible for the https://www.foo.zone and https://foo.zone hosts, whereas the standby can be reached via https://standby.foo.zone (port 80 for plain HTTP works as well). The same principle is followed with all the other hosts, e.g. irregular.ninja, paul.buetow.org and so on. The same applies to my Gemini capsules for gemini://foo.zone, gemini://standby.foo.zone, gemini://paul.buetow.org and gemini://standby.paul.buetow.org.

On DNS failover, master and standby swap roles without config changes other than the DNS entries. That's KISS (keep it simple and stupid)!

Let's Encrypt TLS certificates



All my hosts use TLS certificates from Let's Encrypt. The ACME automation for requesting and keeping the certificates valid (up to date) requires that the host requesting a certificate from Let's Encrypt is also the host using that certificate.

If the master always served foo.zone and the standby always served standby.foo.zone, there would be a problem after a failover: the new master wouldn't have a valid certificate for foo.zone, and the new standby wouldn't have a valid certificate for standby.foo.zone, which would lead to TLS errors on the clients.

As a solution, the CRON job responsible for the DNS failover also checks the current week number of the year so that:

  • In an odd week number, the first server is the default master
  • In an even week number, the second server is the default master.

Which translates to:

# Weekly auto-failover for Let's Encrypt automation
local -i -r week_of_the_year=$(date +%U)
if [ $(( week_of_the_year % 2 )) -eq 0 ]; then
    local tmp=$master
    master=$standby
    standby=$tmp
fi

This way, a DNS failover is performed weekly so that the ACME automation can update the Let's Encrypt certificates (for master and standby) before they expire on each VM.

The ACME automation is yet another daily CRON script /usr/local/bin/acme.sh. It iterates over all of my Let's Encrypt hosts, checks whether they resolve to the same IP address as the current VM, and only then invokes the ACME client to request or renew the TLS certificates. So, there are always correct requests made to Let's Encrypt.
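
The decision logic of that wrapper can be sketched like this (a hypothetical helper, not the author's actual acme.sh; in the real script, the resolved address would come from dig and the ACME client would be acme-client):

```shell
#!/bin/sh
# Hedged sketch: only request/renew a certificate when the host name
# currently resolves to this VM's own address.
should_renew() {
    resolved_ip=$1   # in reality: $(dig +short "$host" A)
    own_ip=$2        # this VM's public IPv4 address
    [ "$resolved_ip" = "$own_ip" ]
}

# Example: this VM owns 46.23.94.99 and foo.zone resolves to it,
# so the ACME client would be invoked for foo.zone.
if should_renew 46.23.94.99 46.23.94.99; then
    echo "would run: acme-client foo.zone"
fi
```

This keeps the ACME requests "always correct" in the sense described above: a VM never asks Let's Encrypt for a certificate it couldn't validate.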

Let's Encrypt certificates expire after 90 days, so a weekly failover of my VMs leaves plenty of time for renewal.

acme.sh.tpl - Rex template for the acme.sh script of mine.
https://man.OpenBSD.org/acme-client.1
Let's Encrypt with OpenBSD and Rex

Monitoring



CRON sends me an E-Mail whenever a failover is performed (or whenever one fails). Furthermore, I monitor my DNS servers and hosts through Gogios, the monitoring system I developed.

https://codeberg.org/snonux/gogios
KISS server monitoring with Gogios

Gogios, being my own project, isn't part of the OpenBSD base system.

Rex automation



I use Rexify, a friendly configuration management system that allows automatic deployment and configuration.

https://www.rexify.org
codeberg.org/snonux/rexfiles/frontends

Rex isn't part of the OpenBSD base system, but I didn't need to install any external software on the OpenBSD hosts either, as Rex is invoked from my laptop!

More HA



Other highly available services running on my OpenBSD VMs are my MTAs for mail forwarding (OpenSMTPD - also part of the OpenBSD base system) and the authoritative DNS servers (nsd) for all my domains. No particular HA setup is required here, though, as the protocols (SMTP and DNS) already take care of failing over to the next available host!

https://www.OpenSMTPD.org/

As a password manager, I use geheim, a command-line tool I wrote in Ruby, with encrypted files in a git repository (I even have it installed in Termux on my phone). For HA reasons, I simply updated the client code so that it always synchronises the database with both servers when I run the sync command.

https://codeberg.org/snonux/geheim

E-Mail your comments to paul@nospam.buetow.org :-)

Other *BSD and KISS related posts are:

2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network
2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs
2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts
2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation
2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage
2024-04-01 KISS high-availability with OpenBSD (You are currently reading this)
2024-01-13 One reason why I love OpenBSD
2023-10-29 KISS static web photo albums with photoalbum.sh
2023-06-01 KISS server monitoring with Gogios
2022-10-30 Installing DTail on OpenBSD
2022-07-30 Let's Encrypt with OpenBSD and Rex
2016-04-09 Jails and ZFS with Puppet on FreeBSD

Back to the main site
A fine Fyne Android app for quickly logging ideas programmed in Go gemini://foo.zone/gemfeed/2024-03-03-a-fine-fyne-android-app-for-quickly-logging-ideas-programmed-in-golang.gmi 2024-03-03T00:07:21+02:00 Paul Buetow aka snonux paul@dev.buetow.org I am an ideas person. I find myself frequently somewhere on the streets with an idea in my head but no paper journal noting it down.

A fine Fyne Android app for quickly logging ideas programmed in Go



Published at 2024-03-03T00:07:21+02:00

I am an ideas person. I find myself frequently somewhere on the streets with an idea in my head but no paper journal noting it down.

I have tried many note apps for my Android (I use GrapheneOS) phone. Most of them either don't do what I want, are proprietary software, require Google Play services (I have the main profile on my phone de-googled) or are too bloated. I was never into mobile app development, as I'm not too fond of the complexity of the developer toolchains. I don't want to use Android Studio (as a NeoVim user), and I don't want to use Java or Kotlin. I want to use a language I know (and like) for mobile app development. Go would be one of those languages.

Quick logger Logo

Table of Contents




Enter Quick logger



Enter Quick logger – a compact GUI Android (well, cross-platform, thanks to Fyne) app I've crafted using Go and the nifty Fyne framework. With Fyne, the app can easily be compiled into an Android APK. As of this writing, the app's whole Go source code is only 75 lines long! This little tool is designed for spontaneous moments, allowing me to quickly log my thoughts as plain text files on my Android phone. There are no fancy file formats. Just plain text!

https://codeberg.org/snonux/quicklogger
https://fyne.io
https://go.dev

There's no need to navigate complex menus or deal with sync issues. I jot down my idea, and Quick logger saves it to a plain text file in a designated local folder on my phone. There is one text file per note (with a timestamp in the file name). Once logged, a file can't be edited anymore (this keeps it simple). If I want to correct or change a note, I simply write a new one. My notes are always small (usually one short sentence each), so there's no need for edit functionality. I can edit them later on my actual computer if I want to.

With Syncthing, the note files are then synchronised to my ~/Notes directory on my home computer. From there, a small Raku glue script adds them to my Taskwarrior DB so that I can process them later (e.g. take action on that one idea I had). The script then deletes the original note files from my computer and also (through Syncthing) from my phone.
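
The glue step can be sketched in shell as follows (a hypothetical sketch with an assumed .txt naming scheme; the author's actual glue script is written in Raku):

```shell
#!/bin/sh
# Hedged sketch of the note import: add each synced note to Taskwarrior,
# then delete the file so Syncthing removes it from the phone as well.
import_notes() {
    dir=$1
    for f in "$dir"/*.txt; do
        [ -e "$f" ] || continue           # no notes, nothing to do
        task add "$(cat "$f")" && rm "$f" # 'task' is the Taskwarrior CLI
    done
}
```

Something like `import_notes "$HOME/Notes"` could then run from another CRON job or be invoked manually.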

https://syncthing.net
https://raku.org
https://taskwarrior.org

Quick logger's user interface is as minimal as it gets. When I launch Quick logger, I'm greeted with a simple window where I can type plain text. Hit the "Log text" button, and voilà – the input is timestamped and saved as a file in my chosen directory. If I need to change the directory, the "Preferences" button brings up a window where I can set the notes folder and get back to logging.

For the code-savvy folks out there, Quick logger is a neat example of what you can achieve with Go and Fyne. It's a testament to building functional, cross-platform apps without getting bogged down in the nitty-gritty of platform-specific details. Thanks to Fyne, I am pleased with how easy it is to make mobile Android apps in Go.

Quick logger running on Android

My Android apps will never be polished, but they will get the job done, and this is precisely how I want them to be: minimalistic but functional. I could spend more time polishing Quick logger, but then it might end up like any other notes app out there (complicated or bloated).

All easy-peasy?



Updated 2025-05-15: When using fyne-cross android, everything works now! I no longer have to perform any of the workarounds listed below!

I did have some issues with the app logo for Android, though. Android always showed the default app icon instead of my custom icon whenever I used a custom AndroidManifest.xml for custom app storage permissions. Without a custom AndroidManifest.xml, the app icon would be displayed under Android, but then the app would not have the MANAGE_EXTERNAL_STORAGE permission, which is required for Quick logger to write to a custom directory. I found a workaround, which I commented on at GitHub:

https://github.com/fyne-io/fyne/issues/3077#issuecomment-1912697360

What worked, however (the app icon showing up), was to clone the fyne project, change all occurrences of android.permission.INTERNET to android.permission.MANAGE_EXTERNAL_STORAGE in the source tree (as these are all the changes I wanted in my custom Android manifest), and recompile fyne. Now everything works. I know, this is more of a hammer approach!

Hopefully, I won't need to use this workaround anymore. But for now, it is a fair tradeoff for what I am getting.

I hope this will inspire you to write your own small mobile apps in Go using the awesome Fyne framework! PS: The Quick logger logo was generated by ChatGPT.

E-Mail your comments to paul@nospam.buetow.org :-)

Other Go related posts are:

2024-03-03 A fine Fyne Android app for quickly logging ideas programmed in Go (You are currently reading this)

Back to the main site
From `babylon5.buetow.org` to `*.buetow.cloud` gemini://foo.zone/gemfeed/2024-02-04-from-babylon5.buetow.org-to-.cloud.gmi 2024-02-04T00:50:50+02:00 Paul Buetow aka snonux paul@dev.buetow.org Recently, my employer sent me to a week-long AWS course. After the course, there wasn't any hands-on project I could dive into immediately, so I moved parts of my personal infrastructure to AWS to level up a bit through practical hands-on.

From babylon5.buetow.org to *.buetow.cloud



Published at 2024-02-04T00:50:50+02:00

Recently, my employer sent me to a week-long AWS course. After the course, there wasn't any hands-on project I could dive into immediately, so I moved parts of my personal infrastructure to AWS to level up a bit through practical hands-on.

So, I migrated all of my Docker-based self-hosted services to AWS. Usually, I am not a big fan of big cloud providers and instead use smaller hosters or indie providers and self-made solutions. However, I also must go with the times and try out technologies currently hot on the job market. I don't want to become the old man who yells at cloud :D

Old man yells at cloud

Table of Contents




The old *.buetow.org way



Before the migration, all those services were reachable through buetow.org subdomains (Buetow is my last name) and ran in Docker containers on a single Rocky Linux 9 VM at Hetzner. There was also an Nginx reverse proxy with TLS offloading (with Let's Encrypt certificates). The Rocky Linux 9 machine's hostname was babylon5.buetow.org (named after the science fiction series).

https://en.wikipedia.org/wiki/Babylon_5

The downsides of this setup were:

  • Not highly available. If the server goes down, no service is reachable until it's repaired. To be fair, the Hetzner cloud VM is redundant by itself and would have re-spawned on a different worker node, I suppose.
  • Manual installation.

About the manual installation part: I could have used a configuration management system like Rexify, Puppet, etc., but I decided against it back then, as setting up Docker containers isn't so complicated with simple start scripts. And it's only a single Linux box, where a manual installation is less painful. However, regular backups (which Hetzner can do automatically for you) were a must.

The benefits of this setup were:

  • KISS (Keep it Simple Stupid)
  • Cheap

I kept my buetow.org OpenBSD boxes alive



As pointed out, I only migrated the Docker-based self-hosted services (which run on the Babylon 5 Rocky Linux box) to AWS. Many self-hostable apps come with ready-to-use container images, making deploying them easy.

My other two OpenBSD VMs (blowfish.buetow.org, hosted at Hetzner, and fishfinger.buetow.org, hosted at OpenBSD Amsterdam) still run (and they will keep running) the following services:

  • HTTP server for my websites (e.g. https://foo.zone, ...)
  • ACME for Let's Encrypt TLS certificate auto-renewal.
  • Gemini server for my capsules (e.g. gemini://foo.zone)
  • Authoritative DNS servers for my domains (except buetow.cloud, which is on Route 53 now)
  • Mail transfer agent (MTA)
  • My Gogios monitoring system.
  • My IRC bouncer.

It is all automated with Rex, aka Rexify. This OpenBSD setup is my "fun" or "for pleasure" setup, whereas I always considered the Rocky Linux 9 one the "practical means to an end" setup for having 3rd-party Docker containers up and running with as little work as possible.

(R)?ex, the friendly automation framework
KISS server monitoring with Gogios
Let's encrypt with OpenBSD and Rex

The new *.buetow.cloud way



With AWS, I decided to get myself a new domain name, as this let me fully separate my AWS setup from my conventional setup and give Route 53 a spin as an authoritative DNS service.

I decided to automate everything with Terraform, as I wanted to learn it; it appears to be the standard in the job market now.

All services are deployed automatically to AWS ECS Fargate. ECS is AWS's Elastic Container Service, and Fargate automatically manages the underlying hardware infrastructure (e.g., how many CPUs, how much RAM, etc.) for me. So I don't have to worry about having enough EC2 instances to serve my demand, for example.

The authoritative DNS for the buetow.cloud domain is AWS Route 53. TLS certificates are free here at AWS and offloaded through the AWS Application Load Balancer. The LB acts as a proxy to the ECS container instances of the services. A few services I run in ECS Fargate also require the AWS Network Load Balancer.

All services require some persistent storage. For that, I use an encrypted EFS file system, automatically replicated across all AZs (availability zones) of my region of choice, eu-central-1.

In case of an AZ outage, I could re-deploy all the failed containers in another AZ, and all the data would still be there.

The EFS automatically gets backed up by AWS for me following their standard Backup schedule. The daily backups are kept for 30 days.

Domain registration, TLS certificate configuration and configuration of the EFS backup were quickly done through the AWS web interface. These were only one-off tasks, so they weren't fully automated through Terraform.

You can find all Terraform manifests here:

https://codeberg.org/snonux/terraform

Whereas:

  • org-buetow-base sets up the bare VPC (IPv4 and IPv6 subnets in 3 AZs), EFS, ECR (the AWS container registry for some self-built containers) and the Route 53 zone. It's a requirement for most other Terraform manifests in this repository.
  • org-buetow-bastion sets up a minimal Amazon Linux EC2 instance where I can manually SSH into and look at the EFS file system (if required).
  • org-buetow-elb sets up the Elastic Load Balancer, a prerequisite for any service running in ECS Fargate.
  • org-buetow-ecs finally sets up and deploys all the Docker apps mentioned above. Any apps can be turned on or off via the variables.tf file.

The container apps



And here, finally, is the list of all the container apps my Terraform manifests deploy. The FQDNs here may not be reachable, as I spin the services up only on demand (for cost reasons). All services are fully dual-stacked (IPv4 & IPv6).

flux.buetow.cloud



Miniflux is a minimalist and opinionated feed reader. With the move to AWS, I also retired my bloated instance of NextCloud. So, with Miniflux, I retired NextCloud News.

Miniflux requires two ECS containers. One is the Miniflux app, and the other is the PostgreSQL DB.

https://miniflux.app/


audiobookshelf.buetow.cloud



Audiobookshelf was the first Docker app I installed. It is a self-hosted audiobook and podcast server. It comes with a neat web interface, and there is also an Android app available, which also works offline. This is great, as I keep the ECS instance running only some of the time for cost savings.

With Audiobookshelf, I replaced my former Audible subscription and my separate podcast app. For podcast synchronisation, I used to use the Gpodder NextCloud sync app, but I have now retired that one with Audiobookshelf as well :-)

https://www.audiobookshelf.org

syncthing.buetow.cloud



Syncthing is a continuous file synchronisation program. In real-time, it synchronises files between two or more computers, safely protected from prying eyes. Your data is your own, and you deserve to choose where it is stored, whether it is shared with some third party, and how it's transmitted over the internet.

With Syncthing, I retired my old NextCloud Files and the file sync client on all my devices. I also quit my NextCloud Notes setup. All my notes are now plain Markdown files in a Notes directory. On Android, I can edit them with any text or Markdown editor (e.g. Obsidian), and they are synchronised via Syncthing to my other computers, in both directions.

I use Syncthing to synchronise some of my Phone's data (e.g. Notes, Pictures and other documents). Initially, I synced all of my pictures, videos, etc., with AWS. But that was pretty expensive. So for now, I use it only whilst travelling. Otherwise, I will use my Syncthing instance here on my LAN (I have a cheap cloud backup in AWS S3 Glacier Deep Archive, but that's for another blog post).

https://syncthing.net/

radicale.buetow.cloud



Radicale is an excellent minimalist WebDAV calendar and contact synchronisation server. It was good enough to replace my NextCloud Calendar and NextCloud Contacts setup. Unfortunately, there wasn't a ready-to-use Docker image. So, I created my own.

On Android, it works great together with the DAVx5 client for synchronisation.

https://radicale.org/
https://codeberg.org/snonux/docker-radicale-server
https://www.davx5.com/

bag.buetow.cloud



Wallabag is a self-hostable "save now - read later" service, and it also comes with an Android app which also has an offline mode. Think of Getpocket, but open-source!

https://wallabag.org/
https://github.com/wallabag/wallabag

anki.buetow.cloud



Anki is a great (the greatest) flash-card learning program. I am currently learning Bulgarian as my 3rd language. There is also an Android app that has an offline mode, and advanced users can also self-host the server anki-sync-server. For some reason (not going into the details here), I had to build my own Docker image for the server.

https://apps.ankiweb.net/
https://codeberg.org/snonux/docker-anki-sync-server

vault.buetow.cloud



Vaultwarden is an alternative implementation of the Bitwarden server API written in Rust and compatible with upstream Bitwarden clients, perfect for self-hosted deployment where running the official resource-heavy service might not be ideal. So, this is a great password manager server which can be used with any Bitwarden Android app.

I currently don't use it, but I may in the future. I made it available in my ECS Fargate setup anyway for now.

https://github.com/dani-garcia/vaultwarden

I currently use geheim, a Ruby command-line tool I wrote, as my password manager. You can read a little bit about it here under "More":

Sweating the small stuff

bastion.buetow.cloud



This is a tiny ARM-based Amazon Linux EC2 instance, which I sometimes spin up for investigation or manual work on my EFS file system in AWS.

Conclusion



I have learned a lot about AWS and Terraform during this migration. This was actually my first AWS hands-on project with practical use.

All of this was not particularly difficult (but at times a bit confusing). I see the value of Terraform for managing more extensive infrastructures (it was even helpful for my small setup here). At least I now know what all the buzz is about :-). I don't think Terraform's HCL is a nice language. It gets its job done, but it could be more elegant IMHO.

Deploying updates to AWS is much easier, and some of the manual maintenance burdens of my Rocky Linux 9 VM are no longer needed. So I will have more time for other projects!

Will I keep it in the cloud? I don't know yet. But maybe I won't renew the buetow.cloud domain and instead will use *.cloud.buetow.org or *.aws.buetow.org subdomains.

Will the AWS setup be cheaper than my old Rocky Linux setup? It might be, as I only turn ECS and the load balancers on or off on demand. Time will tell! The first forecasts suggest it will cost about the same.

E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site
One reason why I love OpenBSD gemini://foo.zone/gemfeed/2024-01-13-one-reason-why-i-love-openbsd.gmi 2024-01-13T22:55:33+02:00 Paul Buetow aka snonux paul@dev.buetow.org HKISSFISHKISSFISHKISSFISHKISSFISH KISS

One reason why I love OpenBSD



Published at 2024-01-13T22:55:33+02:00

           FISHKISSFISHKIS               
       SFISHKISSFISHKISSFISH            F
    ISHK   ISSFISHKISSFISHKISS         FI
  SHKISS   FISHKISSFISHKISSFISS       FIS
HKISSFISHKISSFISHKISSFISHKISSFISH    KISS
  FISHKISSFISHKISSFISHKISSFISHKISS  FISHK
      SSFISHKISSFISHKISSFISHKISSFISHKISSF
  ISHKISSFISHKISSFISHKISSFISHKISSF  ISHKI
SSFISHKISSFISHKISSFISHKISSFISHKIS    SFIS
  HKISSFISHKISSFISHKISSFISHKISS       FIS
    HKISSFISHKISSFISHKISSFISHK         IS
       SFISHKISSFISHKISSFISH            K
         ISSFISHKISSFISHK               

I just upgraded my OpenBSD systems from 7.3 to 7.4 by following the unattended upgrade guide:

https://www.openbsd.org/faq/upgrade74.html

$ doas installboot sd0 # Update the bootloader (not required for every upgrade)
$ doas sysupgrade # Update all binaries (including Kernel)

sysupgrade downloaded the next release, upgraded to it, and rebooted the system. After the reboot, I ran:

$ doas sysmerge # Update system configuration files
$ doas pkg_add -u # Update all packages
$ doas reboot # Just in case, reboot one more time

That's it! It took me around 5 minutes in total! No issues, just these few commands! It just works! No problems, no conflicts, and no pile of config file merge conflicts (actually, none at all).

I followed the same procedure the previous times and never encountered any difficulties with any OpenBSD upgrades.

I have seen upgrades of other operating systems either take a long time or break the system (requiring manual steps to repair). That's just one of many reasons why I love OpenBSD! There never seem to be any problems. It just gets its job done!

The OpenBSD Project

BTW: Are you looking for an opinionated OpenBSD VM hoster? OpenBSD Amsterdam may be for you. They rock (I have a VM there, too)!

https://openbsd.amsterdam

E-Mail your comments to paul@nospam.buetow.org :-)

Other *BSD related posts are:

2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network
2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs
2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts
2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation
2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage
2024-04-01 KISS high-availability with OpenBSD
2024-01-13 One reason why I love OpenBSD (You are currently reading this)
2022-10-30 Installing DTail on OpenBSD
2022-07-30 Let's Encrypt with OpenBSD and Rex
2016-04-09 Jails and ZFS with Puppet on FreeBSD

Back to the main site
Site Reliability Engineering - Part 3: On-Call Culture gemini://foo.zone/gemfeed/2024-01-09-site-reliability-engineering-part-3.gmi 2024-01-09T18:35:48+02:00 Paul Buetow aka snonux paul@dev.buetow.org Welcome to Part 3 of my Site Reliability Engineering (SRE) series. I'm currently working as a Site Reliability Engineer, and I’m here to share what SRE is all about in this blog series.

Site Reliability Engineering - Part 3: On-Call Culture



Published at 2024-01-09T18:35:48+02:00

Welcome to Part 3 of my Site Reliability Engineering (SRE) series. I'm currently working as a Site Reliability Engineer, and I’m here to share what SRE is all about in this blog series.

2023-08-18 Site Reliability Engineering - Part 1: SRE and Organizational Culture
2023-11-19 Site Reliability Engineering - Part 2: Operational Balance
2024-01-09 Site Reliability Engineering - Part 3: On-Call Culture (You are currently reading this)
2024-09-07 Site Reliability Engineering - Part 4: Onboarding for On-Call Engineers

                    ..--""""----..                 
                 .-"   ..--""""--.j-.              
              .-"   .-"        .--.""--..          
           .-"   .-"       ..--"-. \/    ;         
        .-"   .-"_.--..--""  ..--'  "-.  :         
      .'    .'  /  `. \..--"" __ _     \ ;         
     :.__.-"    \  /        .' ( )"-.   Y          
     ;           ;:        ( )     ( ).  \         
   .':          /::       :            \  \        
 .'.-"\._   _.-" ; ;      ( )    .-.  ( )  \       
  "    `."""  .j"  :      :      \  ;    ;  \      
    bug /"""""/     ;      ( )    "" :.( )   \     
       /\    /      :       \         \`.:  _ \    
      :  `. /        ;       `( )     (\/ :" \ \   
       \   `.        :         "-.(_)_.'   t-'  ;  
        \    `.       ;                    ..--":  
         `.    `.     :              ..--""     :  
           `.    "-.   ;       ..--""           ;  
             `.     "-.:_..--""            ..--"   
               `.      :             ..--""        
                 "-.   :       ..--""              
                    "-.;_..--""                    


Putting Well-being First



Site Reliability Engineering is all about keeping systems reliable, but we often forget how important the human side is. A healthy on-call culture is just as crucial as any technical fix. The well-being of the engineers really matters.

First off, a healthy on-call rotation is about more than just handling incidents. It's about creating a supportive ecosystem. This means cutting down on pain points, offering mentorship, quickly iterating on processes, and making sure engineers have the right tools. But there's a catch—engineers need to be willing to learn. Especially in on-call rotations where SREs work with Software Engineers or QA Engineers, it can be tough to get everyone motivated. QA Engineers want to test, Software Engineers want to build new features; they don’t want to deal with production issues. This can be really frustrating for the SREs trying to mentor them.

Plus, measuring a good on-call experience isn't always clear-cut. You might think fewer pages mean a better on-call setup—and yeah, no one wants to get paged after hours—but it's not just about the number of pages. Trust, ownership, accountability, and solid communication are what really matter.

A key part is giving feedback about the on-call experience to keep learning and improving. If alerts are mostly noise, they need to be tweaked or even ditched. If alerts are helpful, can we automate the repetitive tasks? If there are knowledge gaps, is the documentation lacking? Regular retrospectives ensure that the systems get better over time and the on-call experience improves for the engineers.

Getting new team members ready for on-call duties is super important for keeping systems reliable and efficient. This means giving them the knowledge, tools, and support they need to handle incidents with confidence. It starts with a rundown of the system architecture and common issues, then training on monitoring tools, alerting systems, and incident response protocols. Watching experienced on-call engineers in action can provide some hands-on learning. Too often, though, new engineers get thrown into the deep end without proper onboarding because the more experienced engineers are too busy dealing with ongoing production issues.

A culture where everyone's always on and alert can cause burnout. Engineers need to know their limits, take breaks, and ask for help when they need it. This isn't just about personal health; a burnt-out engineer can drag down the whole team and the systems they manage. A good on-call culture keeps systems running while making sure engineers are happy, healthy, and supported. Experienced engineers should take the time to mentor juniors, but junior engineers should also stay engaged, investigate issues, and learn new things on their own.

For junior engineers, it's tempting to always ask the experts for help whenever something goes wrong. While that might seem reasonable, constantly handing out solutions doesn't scale—there are endless ways for production systems to break. So, every engineer needs to learn how to debug, troubleshoot, and resolve incidents on their own. The experts should be there for guidance and can step in when a junior gets really stuck, but they also need to give space for less experienced engineers to grow and learn.

A blameless on-call culture is essential for creating a safe and collaborative environment where engineers can handle incidents without worrying about getting blamed. It recognizes that mistakes are just part of learning and innovating. When people know they won’t be punished for errors, they’re more likely to talk openly about what went wrong, which helps the whole team learn and improve. Plus, a blameless culture boosts psychological safety, job satisfaction, and reduces burnout, keeping everyone committed and engaged.

Mistakes are gonna happen, which is why having a blameless on-call culture is so important.

Continue with the fourth part of this series:

2024-09-07 Site Reliability Engineering - Part 4: Onboarding for On-Call Engineers

E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site
Bash Golf Part 3 gemini://foo.zone/gemfeed/2023-12-10-bash-golf-part-3.gmi 2023-12-10T11:35:54+02:00 Paul Buetow aka snonux paul@dev.buetow.org This is the third blog post about my Bash Golf series. This series is random Bash tips, tricks, and weirdnesses I have encountered over time.

Bash Golf Part 3



Published at 2023-12-10T11:35:54+02:00

This is the third blog post about my Bash Golf series. This series is random Bash tips, tricks, and weirdnesses I have encountered over time.

2021-11-29 Bash Golf Part 1
2022-01-01 Bash Golf Part 2
2023-12-10 Bash Golf Part 3 (You are currently reading this)

    '\       '\        '\                   .  .          |>18>>
      \        \         \              .         ' .     |
     O>>      O>>       O>>         .                 'o  |
      \       .\. ..    .\. ..   .                        |
      /\    .  /\     .  /\    . .                        |
     / /   .  / /  .'.  / /  .'    .                      |
jgs^^^^^^^`^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                        Art by Joan Stark, mod. by Paul Buetow

Table of Contents




FUNCNAME



If you are looking for a way to dynamically determine the name of the current function (which could be considered the callee in the context of its own execution), the special variable FUNCNAME is what you want. It is an array variable that contains the names of all shell functions currently on the execution call stack. The element FUNCNAME[0] holds the name of the currently executing function, FUNCNAME[1] the name of the function that called it, and so on.

This is particularly useful for logging when you want to include the callee function in the log output. E.g. look at this log helper:

#!/usr/bin/env bash

log () {
    local -r level="$1"; shift
    local -r message="$1"; shift
    local -i pid="$$"

    local -r callee=${FUNCNAME[1]}
    local -r stamp=$(date +%Y%m%d-%H%M%S)

    echo "$level|$stamp|$pid|$callee|$message" >&2
}

at_home_friday_evening () {
    log INFO 'One Peperoni Pizza, please'
}

at_home_friday_evening

The output is as follows:

❯ ./logexample.sh
INFO|20231210-082732|123002|at_home_friday_evening|One Peperoni Pizza, please

:(){ :|:& };:



This one may be widely known already, but I am including it here as I found a cute image illustrating it. To break :(){ :|:& };: down:

  • :(){ ... } declares a function named :
  • :|:& is the function body
  • The ; ends the function declaration statement
  • The final : calls the function :

Let's break down the function body :|:&:

  • The first : is calling the function recursively
  • The |: is piping the output to the function : again (parallel recursion)
  • The & lets it run in the background.

So, it's a fork bomb. If you run it, your computer will run out of resources eventually. (Modern Linux distributions could have reasonable limits configured for your login session, so it won't bring down your whole system anymore unless you run it as root!)

And here is the cute illustration:

Bash fork bomb

Inner functions



Bash defines variables as it is interpreting the code. The same applies to function declarations. Let's consider this code:

#!/usr/bin/env bash

outer() {
  inner() {
    echo 'Intel inside!'
  }
  inner
}

inner
outer
inner

And let's execute it:

❯ ./inner.sh
/tmp/inner.sh: line 10: inner: command not found
Intel inside!
Intel inside!

What happened? The first time inner was called, it wasn't defined yet. It only gets defined once outer has run. Note that inner will then be globally defined. Functions can also be declared multiple times (the last version wins):

#!/usr/bin/env bash

outer1() {
  inner() {
    echo 'Intel inside!'
  }
  inner
}

outer2() {
  inner() {
    echo 'Wintel inside!'
  }
  inner
}

outer1
inner
outer2
inner

And let's run it:

❯ ./inner2.sh
Intel inside!
Intel inside!
Wintel inside!
Wintel inside!

Exporting functions



Have you ever wondered how to execute a shell function in parallel through xargs? The problem is that this won't work:

#!/usr/bin/env bash

some_expensive_operations() {
  echo "Doing expensive operations with '$1' from pid $$"
}

for i in {0..9}; do echo $i; done \
  | xargs -P10 -I{} bash -c 'some_expensive_operations "{}"'

We try here to run ten parallel processes; each of them should run the some_expensive_operations function with a different argument. The arguments are provided to xargs through STDIN one per line. When executed, we get this:

❯ ./xargs.sh
bash: line 1: some_expensive_operations: command not found
bash: line 1: some_expensive_operations: command not found
bash: line 1: some_expensive_operations: command not found
bash: line 1: some_expensive_operations: command not found
bash: line 1: some_expensive_operations: command not found
bash: line 1: some_expensive_operations: command not found
bash: line 1: some_expensive_operations: command not found
bash: line 1: some_expensive_operations: command not found
bash: line 1: some_expensive_operations: command not found
bash: line 1: some_expensive_operations: command not found

There's an easy solution for this. Just export the function! It will then be magically available in any sub-shell!

#!/usr/bin/env bash

some_expensive_operations() {
  echo "Doing expensive operations with '$1' from pid $$"
}
export -f some_expensive_operations

for i in {0..9}; do echo $i; done \
  | xargs -P10 -I{} bash -c 'some_expensive_operations "{}"'

When we run this now, we get:

❯ ./xargs.sh
Doing expensive operations with '0' from pid 132831
Doing expensive operations with '1' from pid 132832
Doing expensive operations with '2' from pid 132833
Doing expensive operations with '3' from pid 132834
Doing expensive operations with '4' from pid 132835
Doing expensive operations with '5' from pid 132836
Doing expensive operations with '6' from pid 132837
Doing expensive operations with '7' from pid 132838
Doing expensive operations with '8' from pid 132839
Doing expensive operations with '9' from pid 132840

If some_expensive_operations called another function, that function would also have to be exported. Otherwise, there will be a runtime error again. E.g., this won't work:

#!/usr/bin/env bash

some_other_function() {
  echo "$1"
}

some_expensive_operations() {
  some_other_function "Doing expensive operations with '$1' from pid $$"
}
export -f some_expensive_operations

for i in {0..9}; do echo $i; done \
  | xargs -P10 -I{} bash -c 'some_expensive_operations "{}"'

... because some_other_function isn't exported! You will also need to add an export -f some_other_function!

Dynamic variables with local



You may know that local is how to declare local variables in a function. What most don't know is that those variables actually have dynamic scope. Let's consider the following example:

#!/usr/bin/env bash

foo() {
  local foo=bar # Declare local/dynamic variable
  bar
  echo "$foo"
}

bar() {
  echo "$foo"
  foo=baz
}

foo=foo # Declare global variable
foo # Call function foo
echo "$foo"

Let's pause a minute. What do you think the output would be?

Let's run it:

❯ ./dynamic.sh
bar
baz
foo

What happened? The variable foo (declared with local) is available in the function it was declared in and in all functions further down the call stack! We can even modify the value of foo there, and the change is visible back up the call stack. It is not a global variable, though: on the last line, echo "$foo" prints the content of the global variable.
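Dynamic scope can also be put to practical use. A sketch of my own (the names debug, noisy_task, and quiet_task are made up for illustration): a verbose flag declared local in one caller is visible to every function it calls, and vanishes again when that caller returns.

```shell
#!/usr/bin/env bash

debug() {
    # Sees the caller's local 'verbose' thanks to dynamic scoping:
    [[ "${verbose:-no}" == yes ]] && echo "debug: $1"
    return 0
}

noisy_task() {
    local verbose=yes # visible inside debug, too
    debug 'running noisy_task'
}

quiet_task() {
    debug 'running quiet_task' # verbose is unset here, prints nothing
}

noisy_task
quiet_task
```

Only the noisy_task call produces debug output; no global flag is ever set.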


if conditionals



Consider all of the following variants more or less equivalent:

#!/usr/bin/env bash

declare -r foo=foo
declare -r bar=bar

if [ "$foo" = foo ]; then
  if [ "$bar" = bar ]; then
    echo ok1
  fi
fi

if [ "$foo" = foo ] && [ "$bar" == bar ]; then
  echo ok2a
fi

[ "$foo" = foo ] && [ "$bar" == bar ] && echo ok2b

if [[ "$foo" = foo && "$bar" == bar ]]; then
  echo ok3a
fi

 [[ "$foo" = foo && "$bar" == bar ]] && echo ok3b

if test "$foo" = foo && test "$bar" = bar; then
  echo ok4a
fi

test "$foo" = foo && test "$bar" = bar && echo ok4b

The output we get is:

❯ ./if.sh
ok1
ok2a
ok2b
ok3a
ok3b
ok4a
ok4b

Multi-line comments



You all know how to comment: put a # in front of a line. For multiple lines, you could use several single-line comments, or abuse heredocs by redirecting them to the : no-op command to emulate multi-line comments.

#!/usr/bin/env bash

# Single line comment

# These are two single line
# comments one after another

: <<COMMENT
This is another way a
multi line comment
could be written!
COMMENT

I will not demonstrate the execution of this script, as it won't print anything! It's obviously not the prettiest way to comment your code, but it can sometimes be handy!

Don't change it while it's executed



Consider this script:

#!/usr/bin/env bash

echo foo
echo echo baz >> $0
echo bar

When it is run, it will do:

❯ ./if.sh
foo
bar
baz
❯ cat if.sh
#!/usr/bin/env bash

echo foo
echo echo baz >> $0
echo bar
echo baz

So what happened? The echo baz line was appended to the script while it was still executing, and the interpreter picked it up! This tells us that Bash evaluates each line as it encounters it. That can lead to nasty side effects when a script is edited while it is still running. Always keep this in mind!
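A defensive idiom against this (my addition, not from the post): wrap the script body in a function and make the last line 'main "$@"; exit'. Bash parses the whole function before running it, and the trailing exit stops the interpreter from ever reading lines appended later.

```shell
#!/usr/bin/env bash

main() {
    echo foo
    echo bar
}

# The explicit exit means Bash never reads past this line,
# even if more lines get appended while the script runs.
main "$@"; exit
```

With this pattern, appending echo baz to the file mid-run has no effect on the running instance.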

E-Mail your comments to paul@nospam.buetow.org :-)

Other related posts are:

2023-12-10 Bash Golf Part 3 (You are currently reading this)
2022-01-01 Bash Golf Part 2
2021-11-29 Bash Golf Part 1
2021-06-05 Gemtexter - One Bash script to rule it all
2021-05-16 Personal Bash coding style guide

Back to the main site
Site Reliability Engineering - Part 2: Operational Balance gemini://foo.zone/gemfeed/2023-11-19-site-reliability-engineering-part-2.gmi 2023-11-19T00:18:18+03:00 Paul Buetow aka snonux paul@dev.buetow.org This is the second part of my Site Reliability Engineering (SRE) series. I am currently employed as a Site Reliability Engineer and will try to share what SRE is about in this blog series.

Site Reliability Engineering - Part 2: Operational Balance



Published at 2023-11-19T00:18:18+03:00

This is the second part of my Site Reliability Engineering (SRE) series. I am currently employed as a Site Reliability Engineer and will try to share what SRE is about in this blog series.

2023-08-18 Site Reliability Engineering - Part 1: SRE and Organizational Culture
2023-11-19 Site Reliability Engineering - Part 2: Operational Balance (You are currently reading this)
2024-01-09 Site Reliability Engineering - Part 3: On-Call Culture
2024-09-07 Site Reliability Engineering - Part 4: Onboarding for On-Call Engineers

⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣠⣾⣷⣄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⣾⠿⠿⠿⠶⠾⠿⠿⣿⣿⣿⣿⣿⣿⠿⠿⠶⠶⠿⠿⠿⣷⠀⠀⠀⠀
⠀⠀⠀⣸⢿⣆⠀⠀⠀⠀⠀⠀⠀⠙⢿⡿⠉⠀⠀⠀⠀⠀⠀⠀⣸⣿⡆⠀⠀⠀
⠀⠀⢠⡟⠀⢻⣆⠀⠀⠀⠀⠀⠀⠀⣾⣧⠀⠀⠀⠀⠀⠀⠀⣰⡟⠀⢻⡄⠀⠀
⠀⢀⣾⠃⠀⠀⢿⡄⠀⠀⠀⠀⠀⢠⣿⣿⡀⠀⠀⠀⠀⠀⢠⡿⠀⠀⠘⣷⡀⠀
⠀⣼⣏⣀⣀⣀⣈⣿⡀⠀⠀⠀⠀⣸⣿⣿⡇⠀⠀⠀⠀⢀⣿⣃⣀⣀⣀⣸⣧⠀
⠀⢻⣿⣿⣿⣿⣿⣿⠃⠀⠀⠀⠀⣿⣿⣿⣿⠀⠀⠀⠀⠈⢿⣿⣿⣿⣿⣿⡿⠀
⠀⠀⠉⠛⠛⠛⠋⠁⠀⠀⠀⠀⢸⣿⣿⣿⣿⡆⠀⠀⠀⠀⠈⠙⠛⠛⠛⠉⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠸⣿⣿⣿⣿⠇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣠⣾⣿⣿⣷⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣸⣿⣿⣿⣿⣿⣿⣆⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠴⠶⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠶⠦⠀⠀

Striking the Right Balance Between Reliability and Speed



Site Reliability Engineering is more than just a bunch of best practices or methods. It's a guiding light for engineering teams, helping them navigate the tricky waters of modern software development and system management.
In the world of software production, there are two big forces that often clash: the push for fast feature releases (velocity) and the need for reliable systems. Traditionally, moving faster meant more risk. SRE helps balance these opposing goals with things like error budgets and SLIs/SLOs. These tools give teams a clear way to measure how much they can push changes without hurting system health. So, the error budget becomes a balancing act, helping teams trade off between innovation and reliability.

Finding the right balance in SRE means juggling operations and coding. Ideally, engineers should split their time 50/50 between these tasks. This isn't just a random rule; it highlights how much SRE values both maintaining smooth operations and driving innovation. This way, SREs not only handle today's problems but also prepare for tomorrow's challenges.

But not all operations tasks are the same. SRE makes a clear distinction between "ops work" and "toil." Ops work is essential for maintaining systems and adds value, while toil is the repetitive, boring stuff that doesn’t. It's super important to recognize and minimize toil because a culture that lets engineers get bogged down in it will kill innovation and growth. The way an organization handles toil says a lot about its operational health and commitment to balance.

A key part of finding operational balance is the tools and processes that SREs use. Great monitoring and observability tools, especially those that can handle lots of complex data, are essential. This isn’t just about having the right tech—it shows that the organization values proactive problem-solving. With systems that can spot potential issues early, SREs can keep things stable while still pushing forward.

Operational balance isn't just about tech or processes; it's also about people. The well-being of on-call engineers is just as important as the health of the services they manage. Doing postmortems after incidents, having continuous feedback loops, and identifying gaps in tools, skills, or resources all help make sure the human side of operations gets the attention it deserves.

In the end, finding operational balance in SRE is an ongoing journey, not a one-time thing. Companies need to keep reassessing their practices, tools, and especially their culture. When they get this balance right, they can keep innovating without sacrificing the reliability of their systems, leading to long-term success.

That all sounds pretty idealistic. The reality is that getting the perfect balance is really tough. No system is ever going to be perfect. But hey, we should still strive for it!

Continue with the third part of this series:

2024-01-09 Site Reliability Engineering - Part 3: On-Call Culture

E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site
'Mind Management' book notes gemini://foo.zone/gemfeed/2023-11-11-mind-management-book-notes.gmi 2023-11-11T22:21:47+02:00 Paul Buetow aka snonux paul@dev.buetow.org These are my personal takeaways after reading 'Mind Management' by David Kadavy. Note that the book contains much more wisdom and that these notes only contain points I personally found worth writing down. This is mainly for my own use, but you might find it helpful too.

"Mind Management" book notes



Published at 2023-11-11T22:21:47+02:00

These are my personal takeaways after reading "Mind Management" by David Kadavy. Note that the book contains much more wisdom and that these notes only contain points I personally found worth writing down. This is mainly for my own use, but you might find it helpful too.

         ,..........   ..........,
     ,..,'          '.'          ',..,
    ,' ,'            :            ', ',
   ,' ,'             :             ', ',
  ,' ,'              :              ', ',
 ,' ,'............., : ,.............', ',
,'  '............   '.'   ............'  ',
 '''''''''''''''''';''';''''''''''''''''''
                    '''

Table of Contents




It's not about time management



Productivity isn't about time management - it's about mind management. When you put a lot of effort into something, there are:

  • The point of diminishing returns
  • The point of negative return

Empty slots in the calendar



If we do more things in less time, use every possible slot, speed read, etc., we are more productive. But in reality, that's not the entire truth. You also trade that one thing against everything else... You cut out too much from your actual life.

When you save time...



...keep it.

  • Stare out of the window; that's good for you.
  • Creative thinking needs space. It will pay dividends tomorrow.
  • You will be rewarded with the "Eureka effect" - a sudden new insight.

Follow your mood



Ask yourself: what is my mood right now? We don't always have the energy for everything, so the better strategy is to follow your current mood and energy. E.g.:

  • Didn't sleep enough today? Then, do simple, non-demanding tasks at work
  • Had a great sleep, and there is even time before work starts? Pull in a workout...

Boosting creativity



The morning without coffee is a gift for creativity, but it's easy to get distracted. Minimize distractions, too. I have no window to stare out of, only a plain blank wall.

  • The busier you are, the less creative you will be.
  • Event time (divergent thinking) vs clock time (convergent thinking)
  • Don't race against time, but walk alongside it, with rough timelines.
  • Don't judge every day by the harvest, but by the seeds you plant.

The right mood for the task at hand



We need to try many different combinations. Limiting ourselves and trying too hard leads to frustration and burnout. Creativity requires many iterations.

I can only work according to my available brain power.

I can also change my mood according to what needs improvement. Just imagine the last time you were in that mood and then try to get into it. It can take several tries to hit a working mood. Try to replicate that mental state. This can also be by location or by another habit, e.g. by a beer.

Once you are in a mental state, don't try to change it. It will take a while for your brain to switch to a completely different state.

Week of want. For a week, only do what you want and not what you must do. Your ideas will get much more expansive.

Doing what you want to do gives you pleasure and puts you in a good mood, which increases creativity.

Creativity hacks



  • Coffee can cause anxiety.
  • Take L-theanine with coffee to take the edge off and get a relaxed focus.
  • Green tea, which tastes sweet, plus a supplement boost.
  • Also wine. But be careful with alcohol. Don't drink a whole bottle.
  • Have a machine without distractions and internet access for writing.
  • Go to open spaces for creativity.
  • Go to closed spaces for polishing.

Planning and strategizing



Minds work better in sprints and not in marathons. Have a weekly plan, not a daily one.

  • Alternating incubation to avoid blocks.
  • Build on systems that use chaos for growth, e.g. unplanned disasters.
  • The plan is that things won't go according to plan. Be anti-fragile.

Organize by mental state. In the time management context, the mental state doesn't exist. You schedule as many things as possible by project. In the mind management context, mental state is everything. You could prepare by mental state and not by assignment.

You could schedule exploratory tasks for when you are grieving. Good systems create slack for creativity. Planning should only take a few minutes.

Fake it until you make it.



  • E.g. act calm if you want to be calm.
  • Talk slowly and deepen your voice a bit to appear more confident. You will also become more confident.
  • Also, use power positions for better confidence.

E-Mail your comments to paul@nospam.buetow.org :-)

Other book notes of mine are:

2025-06-07 "A Monk's Guide to Happiness" book notes
2025-04-19 "When: The Scientific Secrets of Perfect Timing" book notes
2024-10-24 "Staff Engineer" book notes
2024-07-07 "The Stoic Challenge" book notes
2024-05-01 "Slow Productivity" book notes
2023-11-11 "Mind Management" book notes (You are currently reading this)
2023-07-17 "Software Developmers Career Guide and Soft Skills" book notes
2023-05-06 "The Obstacle is the Way" book notes
2023-04-01 "Never split the difference" book notes
2023-03-16 "The Pragmatic Programmer" book notes

Back to the main site
KISS static web photo albums with `photoalbum.sh` gemini://foo.zone/gemfeed/2023-10-29-kiss-static-web-photo-albums-with-photoalbum.sh.gmi 2023-10-29T22:25:04+02:00 Paul Buetow aka snonux paul@dev.buetow.org Once in a while, I share photos on the inter-web with family and friends or on my The Irregular Ninja photo site. One hobby of mine is photography (even though I don't have enough time for it - so I am primarily a point-and-shoot photographer).

KISS static web photo albums with photoalbum.sh



Published at 2023-10-29T22:25:04+02:00

Once in a while, I share photos on the inter-web with family and friends or on my The Irregular Ninja photo site. One hobby of mine is photography (even though I don't have enough time for it - so I am primarily a point-and-shoot photographer).

I'm not particularly eager to use any photo social sharing platforms such as Flickr, 500px (I used them regularly in the past), etc., anymore. I value self-hosting, DIY and privacy (nobody should data mine my photos), and no third party should have any rights to my pictures.

I value KISS (keep it simple, stupid). All that's required for a web photo album is some simple HTML, spiced up with CSS. No need for JavaScript, no need for a complex dynamic website.

         ___        .---------.._
  ______!fsc!_....-' .g8888888p. '-------....._
.'          //     .g8:       :8p..---....___ \'.
| foo.zone //  ()  d88:       :88b|==========! !|
|         //       888:       :888|==========| !|
|___      \\_______'T88888888888P''----------'//|   
|   \       """"""""""""""""""""""""""""""""""/ |   
|    !...._____      .="""=.   .[]    ____...!  |   
|   /               ! .g$p. !   .[]          :  |   
|  !               :  $$$$$  :  .[]          :  |   
|  !irregular.ninja ! 'T$P' !   .[]           '.|   
|   \__              "=._.="   .()        __    |   
|.--'  '----._______________________.----'  '--.|
'._____________________________________________.'   

Table of Contents




Introducing photoalbum.sh



photoalbum.sh is a minimal Bash (Bourne Again Shell) script for Unix-like operating systems (such as Linux) to generate static web photo albums. The resulting static photo album is pure HTML+CSS (without any JavaScript!). It is specially designed to be as simple as possible.

Installation



Installation is straightforward. All that's required is a recent version of GNU Bash, GNU Make, Git and ImageMagick. On Fedora, the dependencies are installed with:

% sudo dnf install -y ImageMagick make git

Now, clone, make and install the script:

% git clone https://codeberg.org/snonux/photoalbum
Cloning into 'photoalbum'...
remote: Enumerating objects: 1624, done.
remote: Total 1624 (delta 0), reused 0 (delta 0), pack-reused 1624
Receiving objects: 100% (1624/1624), 193.36 KiB | 1.49 MiB/s, done.
Resolving deltas: 100% (1227/1227), done.

% cd photoalbum
/home/paul/photoalbum

% make
cut -d' ' -f2 changelog | head -n 1 | sed 's/(//;s/)//' > .version
test ! -d ./bin && mkdir ./bin || exit 0
sed "s/PHOTOALBUMVERSION/$(cat .version)/" src/photoalbum.sh > ./bin/photoalbum
chmod 0755 ./bin/photoalbum

% sudo make install
test ! -d /usr/bin && mkdir -p /usr/bin || exit 0
cp ./bin/* /usr/bin
test ! -d /usr/share/photoalbum/templates && mkdir -p /usr/share/photoalbum/templates || exit 0
cp -R ./share/templates /usr/share/photoalbum/
test ! -d /etc/default && mkdir -p /etc/default || exit 0
cp ./src/photoalbum.default.conf /etc/default/photoalbum

You should now have the photoalbum command in your $PATH. But don't use it just yet - first, it needs to be set up!

% photoalbum version
This is Photoalbum Version 0.5.1

Setting it up



Now, it's time to set up the Irregular Ninja static web photo album (or any other web photo album you may be setting up!). Create a directory (here: irregular.ninja for the Irregular Ninja photo site - or any other sub-directory reflecting your album's name), and inside of that directory, create an incoming directory. Copy all photos that should be part of the album there.

% mkdir irregular.ninja
% cd irregular.ninja
% # cp -Rpv ~/Photos/your-photos ./incoming

In this example, I am skipping the cp ... part as I intend to use an alternative incoming directory, as you will see later in the configuration file.

The general usage of photoalbum is as follows:

photoalbum clean|generate|version [rcfile]
photoalbum makemake

Whereas:

  • clean: Cleans up the workspace
  • generate: Generates the static photo album
  • version: Prints out the version
  • makemake: Creates a Makefile and photoalbumrc in the current working directory.

So what we will do next is to run the following inside of the irregular.ninja/ directory; it will generate a Makefile and a configuration file photoalbumrc containing a few configurable options:

% photoalbum makemake
You may now customize ./photoalbumrc and run make

% cat Makefile
all:
	photoalbum generate photoalbumrc
clean:
	photoalbum clean photoalbumrc

% cat photoalbumrc
# The title of the photoalbum
TITLE='A simple Photoalbum'

# Thumbnail height geometry
THUMBHEIGHT=300
# Normal geometry height (when viewing photo). Uncomment, to keep original size.
HEIGHT=1200
# Max previews per page.
MAXPREVIEWS=40
# Randomly shuffle all previews.
# SHUFFLE=yes

# Diverse directories, need to be full paths, not relative!
INCOMING_DIR=$(pwd)/incoming
DIST_DIR=$(pwd)/dist
TEMPLATE_DIR=/usr/share/photoalbum/templates/default
#TEMPLATE_DIR=/usr/share/photoalbum/templates/minimal

# Includes a .tar of the incoming dir in the dist, can be yes or no
TARBALL_INCLUDE=yes
TARBALL_SUFFIX=.tar
TAR_OPTS='-c'

# Some debugging options
#set -e
#set -x
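As photoalbum is written in Bash and the rcfile uses shell syntax (note the $(pwd) expansions and the commented-out set -x), it is presumably consumed by simply sourcing it. A minimal sketch of that mechanism, using a made-up demo file (an assumption about photoalbum's internals, not its actual code):

```shell
# Sketch: load a Bash-syntax rcfile by sourcing it. The demo rcfile
# below is hypothetical; the real photoalbumrc has more settings.
rc="$(mktemp)"
cat > "$rc" <<'EOF'
TITLE='A simple Photoalbum'
THUMBHEIGHT=300
EOF
. "$rc"   # source the rcfile; its variables become available here
echo "Title: $TITLE, thumb height: $THUMBHEIGHT"
```

After sourcing, settings like $TITLE are plain shell variables, which also explains why uncommenting `set -x` in the rcfile would enable debug tracing.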

For irregular.ninja, I changed the defaults as follows:

--- photoalbumrc        2023-10-29 21:42:00.894202045 +0200
+++ photoalbumrc.new 2023-06-04 10:40:08.030994440 +0300
@@ -1,23 +1,24 @@
 # The title of the photoalbum
-TITLE='A simple Photoalbum'
+TITLE='Irregular.Ninja'

 # Thumbnail height geometry
-THUMBHEIGHT=300
+THUMBHEIGHT=400
 # Normal geometry height (when viewing photo). Uncomment, to keep original size.
-HEIGHT=1200
+HEIGHT=1800
 # Max previews per page.
 MAXPREVIEWS=40
-# Randomly shuffle all previews.
-# SHUFFLE=yes
+# Randomly shuffle
+SHUFFLE=yes

 # Diverse directories, need to be full paths, not relative!
-INCOMING_DIR=$(pwd)/incoming
+INCOMING_DIR=~/Nextcloud/Photos/irregular.ninja
 DIST_DIR=$(pwd)/dist
 TEMPLATE_DIR=/usr/share/photoalbum/templates/default
 #TEMPLATE_DIR=/usr/share/photoalbum/templates/minimal

 # Includes a .tar of the incoming dir in the dist, can be yes or no
-TARBALL_INCLUDE=yes
+TARBALL_INCLUDE=no
 TARBALL_SUFFIX=.tar
 TAR_OPTS='-c'

So I changed the album title, adjusted some image and thumbnail dimensions, and I want all images to be randomly shuffled every time the album is generated! I also keep all my photos in my Nextcloud photo directory and don't want to copy them to the local incoming directory. And no tarball of the whole album is provided as a download anymore.

Generating the static photo album



Let's generate it. Depending on the image sizes and count, the following step may take a while.

% make
photoalbum generate photoalbumrc
Processing 1055079_cool-water-wallpapers-hd-hd-desktop-wal.jpg to /home/paul/irregular.ninja/dist/photos/1055079_cool-water-wallpapers-hd-hd-desktop-wal.jpg
Processing 11271242324.jpg to /home/paul/irregular.ninja/dist/photos/11271242324.jpg
Processing 11271306683.jpg to /home/paul/irregular.ninja/dist/photos/11271306683.jpg
Processing 13950707932.jpg to /home/paul/irregular.ninja/dist/photos/13950707932.jpg
Processing 14077406487.jpg to /home/paul/irregular.ninja/dist/photos/14077406487.jpg
Processing 14859380100.jpg to /home/paul/irregular.ninja/dist/photos/14859380100.jpg
Processing 14869239578.jpg to /home/paul/irregular.ninja/dist/photos/14869239578.jpg
Processing 14879132910.jpg to /home/paul/irregular.ninja/dist/photos/14879132910.jpg
.
.
.
Generating /home/paul/irregular.ninja/dist/html/7-4.html
Creating thumb /home/paul/irregular.ninja/dist/thumbs/20211130_091051.jpg
Creating blur /home/paul/irregular.ninja/dist/blurs/20211130_091051.jpg
Generating /home/paul/irregular.ninja/dist/html/page-7.html
Generating /home/paul/irregular.ninja/dist/html/7-5.html
Generating /home/paul/irregular.ninja/dist/html/7-5.html
Generating /home/paul/irregular.ninja/dist/html/7-5.html
Creating thumb /home/paul/irregular.ninja/dist/thumbs/DSCF0188.JPG
Creating blur /home/paul/irregular.ninja/dist/blurs/DSCF0188.JPG
Generating /home/paul/irregular.ninja/dist/html/page-7.html
Generating /home/paul/irregular.ninja/dist/html/7-6.html
Generating /home/paul/irregular.ninja/dist/html/7-6.html
Generating /home/paul/irregular.ninja/dist/html/7-6.html
Creating thumb /home/paul/irregular.ninja/dist/thumbs/P3500897-01.jpg
Creating blur /home/paul/irregular.ninja/dist/blurs/P3500897-01.jpg
.
.
.
Generating /home/paul/irregular.ninja/dist/html/8-0.html
Generating /home/paul/irregular.ninja/dist/html/8-41.html
Generating /home/paul/irregular.ninja/dist/html/9-0.html
Generating /home/paul/irregular.ninja/dist/html/9-41.html
Generating /home/paul/irregular.ninja/dist/html/index.html
Generating /home/paul/irregular.ninja/dist/.//index.html

The result will be in the distribution directory ./dist. This directory is publishable to the inter-web:

% ls ./dist
blurs  html  index.html  photos  thumbs

I usually do that via rsync to my web server (I use OpenBSD with the standard httpd web server, btw.), which is as simple as:

% rsync --delete -av ./dist/. admin@blowfish.buetow.org:/var/www/htdocs/irregular.ninja/

Have a look at the end result here:

https://irregular.ninja

PS: There's also a server-side synchronisation script mirroring the same content to another server for high availability reasons (out of scope for this blog post).

Cleaning it up



A simple make clean will clean up the ./dist directory and all other (if any) temp files created.

HTML templates



Poke around in the templates source directory (/usr/share/photoalbum/templates). You will find a bunch of Bash-HTML template files, which you can tweak to your liking.
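To give a rough idea of what a Bash-HTML template can look like, here is a minimal, hypothetical sketch: a shell fragment that expands variables into HTML via a heredoc. The variable and file names are made up; the actual templates shipped with photoalbum.sh differ.

```shell
# Hypothetical Bash-HTML template fragment: an unquoted heredoc lets
# the shell substitute variables like ${TITLE} into the HTML output.
TITLE='Irregular.Ninja'
cat <<EOF
<html><head><title>${TITLE}</title></head>
EOF
```

This pattern (shell variables plus heredocs) is what makes pure-Bash templating feasible without any external template engine.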

Conclusion



A decent-looking photo album generator (in my opinion, at least) in less than 500 lines of Bash code (273 as of this writing, to be precise) and with minimal dependencies; what more do you want? How many LOC would this be in Raku with the same functionality (could it be sub-100?)

Also, I like the CSS effects which I recently added. In particular, for the Irregular Ninja site, I randomly shuffled the CSS effects you see. The background blur images are the same but rotated 180 degrees and blurred out.

photoalbum.sh source code on Codeberg.

E-Mail your comments to paul@nospam.buetow.org :-)

Other Bash and KISS-related posts are:

2024-04-01 KISS high-availability with OpenBSD
2023-12-10 Bash Golf Part 3
2023-10-29 KISS static web photo albums with photoalbum.sh (You are currently reading this)
2023-06-01 KISS server monitoring with Gogios
2022-01-01 Bash Golf Part 2
2021-11-29 Bash Golf Part 1
2021-09-12 Keep it simple and stupid
2021-06-05 Gemtexter - One Bash script to rule it all
2021-05-16 Personal Bash coding style guide

Back to the main site
DTail usage examples gemini://foo.zone/gemfeed/2023-09-25-dtail-usage-examples.gmi 2023-09-25T14:57:42+03:00 Paul Buetow aka snonux paul@dev.buetow.org Hey there. As I am pretty busy this month personally (I am now on Paternity Leave) and as I still want to post once monthly, the blog post of this month will only be some DTail usage examples. They're from the DTail documentation, but not all readers of my blog may be aware of those!

DTail usage examples



Published at 2023-09-25T14:57:42+03:00

Hey there. As I am pretty busy this month personally (I am now on Paternity Leave) and as I still want to post once monthly, the blog post of this month will only be some DTail usage examples. They're from the DTail documentation, but not all readers of my blog may be aware of those!

DTail is a distributed DevOps tool, which I programmed in Go, for tailing, grepping and catting logs and other text files on many remote machines at once.

https://dtail.dev

                              ,_---~~~~~----._
                        _,,_,*^____      _____``*g*\"*,
  ____ _____     _ _   / __/ /'     ^.  /      \ ^@q   f
 |  _ \_   _|_ _(_) |   @f |      ((@|  |@))    l  0 _/
 | | | || |/ _` | | |  \`/   \~____ / __ \_____/    \
 | |_| || | (_| | | |   |           _l__l_           I
 |____/ |_|\__,_|_|_|   }          [______]           I
                        ]            | | |            |
                        ]             ~ ~             |
                        |   Let's tail those logs!   |
                         |                           |

Table of Contents




Commands



DTail consists of a server and several client binaries. In this post, I am showcasing their use!

  • Use dtail to follow logs
  • Use dtail to aggregate logs while they are followed
  • Use dcat to display logs and other text files already written
  • Use dgrep to grep (search) logs and other text files already written
  • Use dmap to aggregate logs and other text files already written
  • dserver is the DTail server, which all the clients can connect to

Following logs



The following example demonstrates how to follow the logs of several servers at once. The server list is provided as a flat text file. The example filters for all records containing the string INFO. Any other Go-compatible regular expression can also be used instead of INFO.

% dtail --servers serverlist.txt --grep INFO --files "/var/log/dserver/*.log"

Hint: you can also provide a comma-separated server list, e.g.: --servers server1.example.org,server2.example.org:PORT,...

Tail example

Hint: You can also use the shorthand version (omitting the --files)

% dtail --servers serverlist.txt --grep INFO "/var/log/dserver/*.log"

Aggregating logs



To run ad-hoc map-reduce aggregations on newly written log lines, you must add a query. The following example follows all remote log lines and prints the result to standard output every few seconds.

Hint: To run a map-reduce query across log lines written in the past, please use the dmap command instead.

% dtail --servers serverlist.txt \
    --files '/var/log/dserver/*.log' \
    --query 'from STATS select sum($goroutines),sum($cgocalls),
             last($time),max(lifetimeConnections)'

Beware: For map-reduce queries to work, you have to ensure that DTail supports your log format. Check out the documentation of the DTail query language and the DTail log formats on the DTail homepage for more information.

Tail map-reduce example

Hint: You can also use the shorthand version:

% dtail --servers serverlist.txt \
    --files '/var/log/dserver/*.log' \
    'from STATS select sum($goroutines),sum($cgocalls),
     last($time),max(lifetimeConnections)'

Here is another example:

% dtail --servers serverlist.txt \
    --files '/var/log/dserver/*.log' \
    --query 'from STATS select $hostname,max($goroutines),max($cgocalls),$loadavg,
             lifetimeConnections group by $hostname order by max($cgocalls)'

Tail map-reduce example 2

You can also continuously append the results to a CSV file by adding outfile append filename.csv to the query:

% dtail --servers serverlist.txt \
    --files '/var/log/dserver/*.log' \
    --query 'from STATS select ... outfile append result.csv'

How to use dcat



The following example demonstrates how to cat files (display the full content of the files) on several servers at once.

As you can see in this example, a DTail client also creates a local log file of all received data in ~/log. You can also use the --noColor and --plain flags (all of this also works with DTail commands other than dcat).

% dcat --servers serverlist.txt --files /etc/hostname

Cat example

Hint: You can also use the shorthand version:

% dcat --servers serverlist.txt /etc/hostname

How to use dgrep



The following example demonstrates how to grep files (display only the lines which match a given regular expression) on multiple servers at once. In this example, we look for some entries in /etc/passwd. This time, we don't provide the server list via a file but rather as a comma-separated list directly on the command line. We also explore the --before, --after and --max flags (see animation).

% dgrep --servers server1.example.org:2223 \
    --files /etc/passwd \
    --regex nologin

Generally, dgrep is also a very useful way to search historic application logs for certain content.

Grep example

Hint: --regex is an alias for --grep.

How to use dmap



To run a map-reduce aggregation over logs written in the past, the dmap command can be used. The following example aggregates all map-reduce fields. dmap will print interim results every few seconds. You can also write the result to a CSV file by adding outfile result.csv to the query.

% dmap --servers serverlist.txt \
    --files '/var/log/dserver/*.log' \
    --query 'from STATS select $hostname,max($goroutines),max($cgocalls),$loadavg,
             lifetimeConnections group by $hostname order by max($cgocalls)'

Remember: For that to work, you have to make sure that DTail supports your log format. You can either use the ones already defined in internal/mapr/logformat or add an extension to support a custom log format. The example here works out of the box though, as DTail understands its own log format already.

DMap example

How to use the DTail serverless mode



All examples so far required remote server(s) to connect to. That makes sense, as after all DTail is a *distributed* tool. However, there are circumstances where you don't really need to connect to a server remotely. For example, you already have a login shell open on the server, and all you want is to run some queries directly on local log files.

The serverless mode does not require any dserver to be up and running, and therefore there is no networking/SSH involved.

All commands shown so far also work in serverless mode. All that needs to be done is to omit the server list. The DTail client then starts in serverless mode.

Serverless map-reduce query



The following dmap example is the same as the previously shown one, but the difference is that it operates on a local log file directly:

% dmap --files /var/log/dserver/dserver.log \
    --query 'from STATS select $hostname,max($goroutines),max($cgocalls),$loadavg,
              lifetimeConnections group by $hostname order by max($cgocalls)'

As a shorthand version the following command can be used:

% dmap 'from STATS select $hostname,max($goroutines),max($cgocalls),$loadavg,
        lifetimeConnections group by $hostname order by max($cgocalls)' \
        /var/log/dserver/dserver.log

You can also use a file input pipe as follows:

% cat /var/log/dserver/dserver.log | \
    dmap 'from STATS select $hostname,max($goroutines),max($cgocalls),$loadavg,
          lifetimeConnections group by $hostname order by max($cgocalls)'

Aggregating CSV files



In essence, this works exactly like aggregating logs. All files operated on must be valid CSV files and the first line of the CSV must be the header. E.g.:

% cat example.csv
name,lastname,age,profession
Michael,Jordan,40,Basketball player
Michael,Jackson,100,Singer
Albert,Einstein,200,Physician
% dmap --query 'select lastname,name where age > 40 logformat csv outfile result.csv' example.csv
% cat result.csv
lastname,name
Jackson,Michael
Einstein,Albert

DMap can also be used to query and aggregate CSV files from remote servers.

Other serverless commands



The serverless mode works transparently with all other DTail commands. Here are some examples:

% dtail /var/log/dserver/dserver.log

% dtail --logLevel trace /var/log/dserver/dserver.log

% dcat /etc/passwd

% dcat --plain /etc/passwd > /etc/test
# Should show no differences.
% diff /etc/test /etc/passwd

% dgrep --regex ERROR --files /var/log/dserver/dserver.log

% dgrep --before 10 --after 10 --max 10 --grep ERROR /var/log/dserver/dserver.log

Use --help for more available options. Or go to the DTail page for more information! Hope you find DTail useful!

E-Mail your comments to paul@nospam.buetow.org :-)

Other related posts are:

2023-09-25 DTail usage examples (You are currently reading this)
2022-10-30 Installing DTail on OpenBSD
2022-03-06 The release of DTail 4.0.0
2021-04-22 DTail - The distributed log tail program

I hope you find the tools presented in this post useful!

Paul

Back to the main site
Site Reliability Engineering - Part 1: SRE and Organizational Culture gemini://foo.zone/gemfeed/2023-08-18-site-reliability-engineering-part-1.gmi 2023-08-18T22:43:47+03:00 Paul Buetow aka snonux paul@dev.buetow.org Being a Site Reliability Engineer (SRE) is like stepping into a lively, ever-evolving universe. The world of SRE mixes together different tech, a unique culture, and a whole lot of determination. It’s one of the toughest but most exciting jobs out there. There's zero chance of getting bored because there's always a fresh challenge to tackle and new technology to play around with. It's not just about the tech side of things either; it's heavily rooted in communication, collaboration, and teamwork. As someone currently working as an SRE, I’m here to break it all down for you in this blog series. Let's dive into what SRE is really all about!

Site Reliability Engineering - Part 1: SRE and Organizational Culture



Published at 2023-08-18T22:43:47+03:00

Being a Site Reliability Engineer (SRE) is like stepping into a lively, ever-evolving universe. The world of SRE mixes together different tech, a unique culture, and a whole lot of determination. It’s one of the toughest but most exciting jobs out there. There's zero chance of getting bored because there's always a fresh challenge to tackle and new technology to play around with. It's not just about the tech side of things either; it's heavily rooted in communication, collaboration, and teamwork. As someone currently working as an SRE, I’m here to break it all down for you in this blog series. Let's dive into what SRE is really all about!

2023-08-18 Site Reliability Engineering - Part 1: SRE and Organizational Culture (You are currently reading this)
2023-11-19 Site Reliability Engineering - Part 2: Operational Balance
2024-01-09 Site Reliability Engineering - Part 3: On-Call Culture
2024-09-07 Site Reliability Engineering - Part 4: Onboarding for On-Call Engineers

                                                                                          
DC on fire:
                                                                                          
                ▓▓                                    ▓▓                ▓▓                
      ░░  ░░    ▓▓▓▓                  ██                  ░░            ▓▓▓▓        ▓▓    
    ▓▓░░░░  ░░  ▓▓▓▓                              ▓▓░░                  ▓▓▓▓              
    ░░░░      ▓▓▓▓▓▓        ▓▓      ▓▓            ▓▓                  ▓▓▓▓▓▓      ▓▓      
    ▓▓░░    ▓▓▒▒▒▒▓▓▓▓    ▓▓        ▓▓▓▓        ▓▓▓▓▓▓              ▓▓▒▒▒▒▓▓▓▓    ▓▓▓▓    
  ██▓▓      ▓▓▒▒░░▒▒▓▓  ▓▓██      ▓▓▓▓▓▓        ▓▓▒▒▓▓              ▓▓▒▒░░▒▒▓▓  ██▓▓▓▓    
  ▓▓▓▓██  ▓▓▒▒░░░░▒▒▓▓  ▓▓▓▓      ▓▓▒▒▒▒▓▓    ▓▓▒▒░░▒▒▓▓██▓▓      ▓▓▒▒░░░░▒▒▓▓  ▓▓▒▒▒▒▓▓  
  ▓▓▒▒▒▒▓▓▓▓▒▒░░▒▒▓▓▓▓▓▓▒▒▒▒▓▓  ▓▓▓▓░░▒▒▓▓    ▓▓▒▒░░▒▒▓▓▒▒▒▒▓▓    ▓▓▒▒░░▒▒▓▓▓▓▓▓▓▓░░▒▒▓▓  
  ▒▒░░▒▒▓▓▓▓▒▒░░▒▒▓▓▓▓▒▒░░▒▒▓▓  ▓▓▒▒░░▒▒▓▓    ▓▓░░░░▒▒▒▒░░░░▒▒██████▒▒░░▒▒██▓▓▓▓▒▒░░▒▒▓▓██
  ░░░░▒▒▓▓▒▒░░▒▒▓▓▓▓▓▓▒▒░░▒▒▓▓██▒▒░░░░▒▒▓▓  ▓▓▒▒░░▒▒▓▓▒▒▒▒░░▒▒▓▓▓▓▒▒░░▒▒▓▓▓▓▓▓▒▒░░░░▒▒▓▓▓▓
  ░░░░▒▒▓▓▒▒░░░░▓▓██▒▒░░░░▒▒▓▓██▒▒░░░░▒▒██▓▓▓▓▒▒░░▒▒▓▓▓▓▒▒░░░░▒▒▓▓▒▒░░░░██▓▓▓▓▒▒░░░░▒▒████
  ▒▒░░▒▒▓▓▓▓░░░░▒▒▓▓▒▒▒▒░░░░▒▒▓▓▓▓▒▒░░░░▒▒▓▓▓▓▒▒░░░░▒▒▓▓▒▒░░▒▒▓▓▓▓▓▓░░░░▒▒▓▓▓▓▓▓▒▒░░░░▒▒▓▓
  ▒▒░░▒▒▓▓▒▒▒▒░░▒▒██▒▒▒▒░░▒▒▒▒██▒▒▒▒░░░░░░▒▒▓▓▒▒░░░░▒▒▒▒░░░░▒▒████▒▒▒▒░░▒▒██▓▓▒▒▒▒░░░░░░▒▒
  ░░░░░░▒▒░░░░░░░░▒▒▒▒▒▒░░░░▒▒▒▒▒▒░░░░░░░░▒▒▒▒░░░░░░▒▒▒▒░░░░░░▒▒▒▒░░░░░░░░▒▒▒▒▒▒░░░░░░░░▒▒
  ░░░░░░░░░░▒▒░░░░░░░░░░░░░░░░░░░░░░░░▒▒░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░▒▒░░░░░░░░░░░░░░░░░░

SRE and Organizational Culture: Navigating the Nexus



At the core of SRE is the principle of "prevention over cure." Unlike traditional IT setups that mostly react to problems, SRE focuses on spotting issues before they happen. This proactive approach involves using Service Level Indicators (SLIs) and Service Level Objectives (SLOs). These tools give teams specific metrics and targets to aim for, helping them keep systems reliable and users happy. It's all about creating a culture that prioritizes user experience and makes sure everything runs smoothly to meet their needs.

Another key concept in SRE is the "error budget." It’s a clever approach that recognizes no system is perfect and that failures will happen. Instead of punishing mistakes, SRE culture embraces them as chances to learn and improve. The idea is to give teams a "budget" for errors, creating a space where innovation can thrive and failures are simply seen as lessons learned.

SRE isn't just about tech and metrics; it's also about people. It tackles the "hero culture" that often ends up burning out IT teams. Sure, having a hero swoop in to save the day can be great, but relying on that all the time just isn’t sustainable. Instead, SRE focuses on collective expertise and teamwork. It recognizes that heroes are at their best within a solid team, making the need for constant heroics unnecessary. This way of thinking promotes a balanced on-call experience and highlights trust, ownership, good communication, and collaboration as key to success. I've been there myself, falling into the hero trap, and I know firsthand that it's just not feasible to be the go-to person for every problem that comes up.

Also, the SRE model puts a big emphasis on good documentation. It's not enough to just have docs; they need to be top-notch and go through the same quality checks as code. This really helps with onboarding new team members, training, and keeping everyone on the same page.

Adopting SRE can be a big challenge for some organizations. They might think the SRE approach goes against their goals, like preferring to roll out new features quickly rather than focusing on reliability, or seeing SRE practices as too much hassle. Building an SRE culture often means taking the time to explain things patiently and showing the benefits, like faster release cycles and a better user experience.

Monitoring and observability are also big parts of SRE, highlighting the need for top-notch tools to query and analyze data. This aligns with the SRE focus on continuous learning and being adaptable. SREs naturally need to be curious, ready to dive into any strange issues, and always open to picking up new tools and practices.

For SRE to really work in any organization, everyone needs to buy into its principles. It's about moving away from working in isolated silos and relying on SRE to just patch things up. Instead, it’s about making reliability a shared responsibility across the whole team.

In short, bringing SRE principles into the mix goes beyond just the technical stuff. It helps shift the whole organizational culture to value things like preventing issues before they happen, always learning, working together, and being open with communication. When SRE and corporate culture blend well, you end up with not just reliable systems but also a strong, resilient, and forward-thinking workplace.

Organizations that have SLIs, SLOs, and error budgets in place are already pretty far along in their SRE journey. Getting there takes a lot of communication, convincing people, and patience.

Continue with the second part of this series:

2023-11-19 Site Reliability Engineering - Part 2: Operational Balance

E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site
Gemtexter 2.1.0 - Let's Gemtext again³ gemini://foo.zone/gemfeed/2023-07-21-gemtexter-2.1.0-lets-gemtext-again-3.gmi 2023-07-21T10:19:31+03:00 Paul Buetow aka snonux paul@dev.buetow.org I proudly announce that I've released Gemtexter version `2.1.0`. What is Gemtexter? It's my minimalist static site generator for Gemini Gemtext, HTML and Markdown, written in GNU Bash.

Gemtexter 2.1.0 - Let's Gemtext again³



Published at 2023-07-21T10:19:31+03:00

I proudly announce that I've released Gemtexter version 2.1.0. What is Gemtexter? It's my minimalist static site generator for Gemini Gemtext, HTML and Markdown, written in GNU Bash.

https://codeberg.org/snonux/gemtexter

-=[ typewriters ]=-  1/98
                                        .-------.
       .-------.                       _|~~ ~~  |_
      _|~~ ~~  |_       .-------.    =(_|_______|_)
    =(_|_______|_)=    _|~~ ~~  |_     |:::::::::|
      |:::::::::|    =(_|_______|_)    |:::::::[]|
      |:::::::[]|      |:::::::::|     |o=======.|
      |o=======.|      |:::::::[]|     `"""""""""`
 jgs  `"""""""""`      |o=======.|
  mod. by Paul Buetow  `"""""""""`

Table of Contents




Why Bash?



Admittedly, this project is too complex for a Bash script. Writing it in Bash was an experiment to see how maintainable a "larger" Bash script could be. It's still pretty maintainable and lets me try out new Bash tricks now and then!

Let's list what's new!

Switch to GPL3 license



Many (almost all) of the tools and commands used by Gemtexter (GNU Bash, GNU Sed, GNU Date, GNU Grep, GNU Source Highlight) are licensed under the GPL anyway. So why not use the same? This was an easy switch, as I was the only code contributor so far!

Source code highlighting support



The HTML output now supports source code highlighting, which is pretty neat if your site is about programming. The requirement is to have the source-highlight command (GNU Source Highlight) installed. Once done, you can annotate a preformatted block with the language to be highlighted. E.g.:

 ```bash
 if [ -n "$foo" ]; then
   echo "$foo"
 fi
 ```

The result will look like this (you can see the code highlighting only in the Web version, not in the Geminispace version of this site):

if [ -n "$foo" ]; then
  echo "$foo"
fi

Please run source-highlight --lang-list for a list of all supported languages.

HTML exact variant



Gemtexter is there to convert your Gemini Capsule into other formats, such as HTML and Markdown. An HTML exact variant can now be enabled in the gemtexter.conf by adding the line declare -rx HTML_VARIANT=exact. The HTML/CSS output changed to reflect a more exact Gemtext appearance and to respect the same spacing as you would see in the Geminispace.

Use of Hack webfont by default



The Hack web font is a typeface designed explicitly for source code. It's a derivative of the Bitstream Vera and DejaVu Mono lineage, but it features many improvements and refinements that make it better suited to reading and writing code.

The font has distinctive glyphs for every character, which helps to reduce confusion between similar-looking characters. For example, the characters "0" (zero), "O" (capital o), and "o" (lowercase o), or "1" (one), "l" (lowercase L), and "I" (capital i) all have distinct looks in Hack, making it easier to read and understand code at a glance.

Hack is open-source and freely available for use and modification under the MIT License.

HTML Mastodon verification support



The following link explains how URL verification works in Mastodon:

https://joinmastodon.org/verification

So we have to hyperlink to the Mastodon profile to be verified and also include a rel='me' attribute in the tag. To do that, add this to the gemtexter.conf (replace the URI with your own Mastodon profile accordingly):

declare -xr MASTODON_URI='https://fosstodon.org/@snonux'

and add the following into your index.gmi:

=> https://fosstodon.org/@snonux Me at Mastodon

The resulting line in the HTML output will be something as follows:

<a href='https://fosstodon.org/@snonux' rel='me'>Me at Mastodon</a>

More



Additionally, a couple of bug fixes, refactorings and overall improvements to the documentation were made.

E-Mail your comments to paul@nospam.buetow.org :-)

Other related posts are:

2024-10-02 Gemtexter 3.0.0 - Let's Gemtext again⁴
2023-07-21 Gemtexter 2.1.0 - Let's Gemtext again³ (You are currently reading this)
2023-03-25 Gemtexter 2.0.0 - Let's Gemtext again²
2022-08-27 Gemtexter 1.1.0 - Let's Gemtext again
2021-06-05 Gemtexter - One Bash script to rule it all
2021-04-24 Welcome to the Geminispace

Back to the main site
'Software Developmers Career Guide and Soft Skills' book notes gemini://foo.zone/gemfeed/2023-07-17-career-guide-and-soft-skills-book-notes.gmi 2023-07-17T04:56:20+03:00 Paul Buetow aka snonux paul@dev.buetow.org These notes are of two books by 'John Sommez' I found helpful. I also added some of my own keypoints to it. These notes are mainly for my own use, but you might find them helpful, too.

"Software Developmers Career Guide and Soft Skills" book notes



Published at 2023-07-17T04:56:20+03:00

These are notes from two books by "John Sonmez" that I found helpful. I also added some of my own key points. These notes are mainly for my own use, but you might find them helpful, too.

         ,..........   ..........,
     ,..,'          '.'          ',..,
    ,' ,'            :            ', ',
   ,' ,'             :             ', ',
  ,' ,'              :              ', ',
 ,' ,'............., : ,.............', ',
,'  '............   '.'   ............'  ',
 '''''''''''''''''';''';''''''''''''''''''
                    '''

Table of Contents




Improve



Always learn new things



When you learn something new, e.g. a programming language, first gather an overview, learn from multiple sources, play around, learn by doing rather than by consuming, and form your own questions. Don't read too much upfront. A large amount of time is spent learning technical skills which are never used. You want a practical set of skills you are actually using. You need to know 20 percent to get 80 percent of the results.

  • Learn a technology with a goal, e.g. implement a tool. Practice, practice, practice.
  • "I know X can do Y, I don't know exactly how, but I can look it up."
  • Read what experts are writing, for example follow blogs. Stay up to date and spend half an hour per day reading blogs and books.
  • Pick an open source application, read the code and try to understand it to get a feel of the syntax of the programming language.
  • Understand that knowing the standard library makes you a much better programmer.
  • Self-learning is the top skill a programmer can have and is also useful in other aspects of your life.
  • Keep learning skills every day. Code every day. Don't be overconfident for job security. Read blogs, read books.
  • If you want to learn, then do it by exploring. Also teach what you learned (for example write a blog post or hold a presentation).

Fake it until you make it, but be honest about your abilities or the lack thereof. There is, however, only so much time between now and when you make it. Refer to your ability to learn.

Boot camps: The advantage of a boot camp is to pragmatically learn things fast. We almost always overestimate what we can do in a day, especially during boot camps. Connect with others during the boot camp.

Set goals



Your own goals are important, but the manager also looks at how the team performs and how someone can help the team perform better. Check whether you are on track with your goals every two weeks in order to avoid surprises at the annual review. Make concrete goals for the next review. Track and document your progress. Invest in your education. Make your goals known. If you want something, then ask for it. Nobody but you knows what you want.

Ratings



If you have to rate yourself, that's a trap: it never works in an unbiased way. Rate yourself as well as you can, but rate your weakest area as high as possible minus one point. Nobody puts a gun to their own head for fun.

  • Don't do peer ratings; they can backfire on you. What if the colleague becomes your new boss?
  • Corporate rankings are unfortunately driven by HR guidelines and politics and only partially mirror your actual performance.

Promotions



The most valuable employees are the ones who make themselves obsolete and automate everything away. Keep a safety net of 3 to 6 months of finances. Save at least 10 percent of your earnings. Also, making more money does not mean that you have to spend more money. Is a new car better than a used car when both can bring you from A to B? Liabilities vs. assets.

  • Raise or promotion, what's better? Promotion is better, as the money will follow anyway.
  • Take projects no-one wants and make them shine. A promotion will follow.
  • A promotion is not going to come to you because you deserve it. You have to hunt and ask for it.
  • Track all kudos (e.g. ask for emails from your colleagues).
  • Big corporations' HR won't keep track of your work for you. That's why it's so important to keep track of your accomplishments and kudos.
  • If you want a raise, be specific about how much and know how to back your demands. Don't make threats or ultimatums.
  • Best way for a promotion is to switch jobs. You can even switch back with a better salary.

Finish things



Hard work is necessary to accomplish results. However, work smarter, not harder. That said, working smart is not a substitute for working hard: work both hard and smart.

  • Learn to finish things even without motivation. Things pay off when you stick with them, and eventually the motivation can come back.
  • You will fail if you don't plan realistically. Also set a schedule and follow it as if your life depends on it.
  • Advances come only if you give more than what is asked. Consistency, commitment and knowing what you need to do matter more than raw hard work.
  • Any action is better than no action. If you stay stuck, you have gained nothing.
  • You need to know the unknowns. Identify as many of them as possible.

Hard vs. fun: Both engage the brain (video games vs. work). Some work is hard and some is easy. Hard work is boring. The harsh truth is that you have to put in hard, boring work in order to accomplish things and be successful. Work won't always be boring, though, as joy follows with mastery.

Defeat is finally giving up. Failure is the road to success; embrace it. Failure does not define you, but how you respond to it does. Events don't make you unhappy; how you react to them does.

Expand the empire



The larger your empire, the larger your circle of influence. The larger your circle of influence, the more opportunities you have.

  • Do the dirty work if you want to expand the empire. That's where the opportunities are.
  • SCRUM often fails due to a lack of commitment. The backlog just becomes a wish list.
  • Apply your quality standards to your work. Don't cross your line of compromise. Always improve your skills. Never be content with being merely good enough.

Become visible and keep track of your accomplishments, e.g., write a weekly summary. Give presentations, be seen. Learn new things and share your learnings. Be the problem solver, not the blamer.

Be pragmatic and also manage your time



Make use of time boxing via the Pomodoro technique: set a target number of rounds and track them. That gives you exact, focused work time - that's really the trick. For example, set a goal of six daily pomodoros.

  • Every time you do something, question why it makes sense. Be pragmatic and don't do things just because they are considered best practice.
  • You can also apply the time-blocking technique (Cal Newport) for focused deep work.

You should feel good about the work you did even if you didn't finish the task. Pomodoro-wise, you will still feel good even if the task at hand isn't done yet. This also helps you to enjoy your time off more. Working longer hours may not gain you anything.
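
The round-tracking idea above can be sketched in a few lines. This is a minimal illustration, not from the book; the function name and parameters are my own, and a real session would use 25-minute rounds:

```python
import time


def pomodoro(rounds: int = 6, focus_minutes: float = 25) -> int:
    """Run a fixed number of focus rounds and return how many completed."""
    done = 0
    for i in range(1, rounds + 1):
        print(f"Round {i}/{rounds}: focus for {focus_minutes} minutes")
        time.sleep(focus_minutes * 60)  # the focused work block
        done += 1
        print(f"Round {i}/{rounds} done - take a short break")
    return done


if __name__ == "__main__":
    # Zero-length rounds so the demo finishes instantly.
    completed = pomodoro(rounds=6, focus_minutes=0)
    print(f"Completed {completed} pomodoros today")
```

The point is not the timer itself but counting completed rounds: the day is a success once the counter reaches the preset quota, regardless of whether every task got finished.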

The quota system



Define a quota of things done, e.g. N runs per week, M blog posts per month, or O pomodoros per week. This helps with consistency. Truly commit to these quotas; failure is not an option. Start with small commitments, and don't commit to something you can't fulfill, otherwise you set yourself up for failure.

  • Why does the quota system work? A slow and consistent pace is the key. It also overcomes willpower weaknesses, as the goals are preset.
  • Internal motivation is more important than external motivation. Check out Daniel Pink's book "Drive".
  • Multitasking: Batching is effective, e.g. handling emails twice daily at pre-set times.

Don't waste time



The biggest time waster is watching TV. The TV is programming you. It's insane how much TV Americans watch while working full time. Schedule one show at a time and watch it when you want to watch it. Most movies are crap anyway. The good ones will come to you, as people will talk about them.

  • Social media is a time waster as well. Schedule your social media time; for example, be on Facebook for at most one hour on Saturdays.
  • Meetings can waste time as well. Simply don't go to them, or try to cancel a meeting if it can be dealt with via email.
  • Enjoying things is not a waste of time; e.g., you can still play a game once in a while. It is important not to cut everything you enjoy out of your life.

Habits



Try to have as many good habits as possible. Start with easy habits and make them a little more challenging over time. Set anchors and rewards. Over time the routines will naturally become habits.

Habit stacking is effective: it means combining multiple habits at the same time. For example, you can work out on a cross trainer while watching a learning video on O'Reilly Safari Online, getting closer to your weekly step goal at the same time.

  • We don't have direct control over our habits, only over our routines.
  • Routines, though, help to form the habits.

Work-life balance



Avoid overtime hours. They are not as beneficial as you might think and come with only very small rewards. Rather invest in yourself, not in your employer.

  • Work-life balance is a myth. Make it so that you enjoy both work and your personal life, not just your personal life.
  • Maintain fewer but better relationships. The reward is a better integrated life.
  • Live in the present moment. Make the best of every moment of your life.
  • Enjoy every aspect of your life. If you take away one thing from this book, it is that.

Use your most productive hours to work on yourself; make that your priority. Make taking care of yourself a priority (e.g., do workouts or learn a new language). You can always find one or two hours per day for a workout, but are you willing to pay the price?

Mental health



  • Friendships and positive thinking help you attain and maintain better health, a longer life, better productivity and increased happiness.
  • Positive thinking can be trained and become a habit. Read the book "The Power of Positive Thinking".
  • Stoicism helps. Meditation helps. Playing for fun helps too.

Become the person you want to become (your self image). Program your brain unconsciously. Don't become the person other people want you to be. Embrace yourself, you are you.

In most cases, burnout is just an illusion. If you lack motivation, push through the wall. People usually don't pass the wall because they feel burned out. After pushing through the wall you will have the most fun; for example, that's when you will be able to play the guitar really well.

Physical health



Utilise a standing desk and a treadmill (you can walk and type at the same time). Increase the incline to burn more calories. Even just standing at the desk burns more calories than sitting. When you use the Pomodoro technique, you can use the small breaks for push-ups (though that may not work as well in a fasted state).

  • You can only do one thing at a time: lose fat or gain muscle, not both.
  • Train your strength with heavy lifting, but only with very few repetitions (e.g., a maximum of five per exercise; everything beyond that is bodybuilding).
  • If you want to increase muscle mass, use medium weights but lift them more often. If you want to increase your endurance, lift light weights with even more reps.
  • Avoid highly processed foods.

Intermittent fasting is an effective method for maintaining weight and health. But it does not mean you can eat only junk food during the feeding windows. Diet and nutrition are the most important factors for health and fitness. They also make it easier to stay focused and positive.

No drama



Avoid drama at work. Wherever there are humans, there is drama. You can decide where to spend your energy. But don't avoid conflict: conflict is healthy in any kind of relationship. Be tactful and state your opinion. The goal is to find the best solution to the problem.

Don't worry about what other people do or don't do; worry only about yourself. Keep quiet and get your own things done. But you could help inspire a colleague who isn't working.

  • During an argument, take the opponent's position and see how your opinion changes.
  • If you try to convince someone else, it's an argument. If you try to find the best solution, it's a good resolution.
  • If someone is hurting the team let the manager know but phrase it nicely.
  • How do you get rid of a person who never stops talking? Officially set up focus hours during which you don't want to be interrupted. Present it as if it were your own defect that you get interrupted easily.
  • TOXIC PEOPLE: AVOID THEM. RUN.
  • Your boss likes it when you get things done without being asked about them all the time, and without drama.

You have to learn how to work in a team. Be honest but tactful. It's not about being the loudest but about selling your ideas. Don't argue, otherwise you won't sell anything. Be persuasive by finding common ground. Or lead your colleagues to your idea instead of selling it upfront. Communicate clearly.

Personal brand



  • Build your value outside the company as well. Build your personal brand. Show how valuable you are, also to other companies. Become an asset.
  • Invest in your education. Make your goals known. If you want something, ask for it (see also the section about goals in this document).

Market yourself



  • The best way to market yourself is to make yourself useful.
  • Create a brand. Decide on your focus. Get your name out as often as possible.

Have a blog. Schedule your posts. Consistency beats every other factor; e.g., publish a new post once a month. Find your voice; you don't have to sound academic. Keep writing: if you keep at it long enough, the rewards will come. Your own blog can take five years to take off. Most people give up too soon.

  • Consistency is key for your blog. Also write quality content. Don't try to be a man of success; try to be a man of value.
  • Have an elevator pitch: "buetow.org - Having fun with computers!"
  • Have social media accounts, especially the ones which are more tech related.

Networking



Ask people questions so they talk about themselves; they are not really interested in you. Use meetup.com to find groups you are interested in and build up your network over time. Don't drink at social networking events, even when others do. Talking to other people at events only has upsides. Just saying "hi" and introducing yourself is enough. What's the worst that can happen? If a person rejects you, so what, life goes on. Ask open questions, not yes/no questions, e.g.: "What is your story, why are you here?"

Public speaking



Go on stage 10 minutes before your talk and introduce yourself to the people in the front row. During the talk they will smile at you and encourage you.

  • Try at least five times before giving up on public speaking. You can also start small, e.g., present a topic at work that you are currently learning.
  • Practise your talk and your timing. You can also record your practice runs.

Just do it. Go to conferences, even if you are not speaking. Sell it to your boss: you would learn this and that, and you would present the learnings to the team afterwards.

New job



For the interview



  • Build up a network before the interview. E.g., follow and comment on blogs, go to meet-ups and conferences, and join user groups.
  • Ask to touch base before the real interview and ask questions about the company. Do "pre-interviews".
  • Have a blog: a CV can only be two pages long and an interview can only last a couple of hours. A blog also helps you become a better communicator.

If you are specialized, there is a better chance of landing a fitting job. No one will hire a general lawyer if specialized lawyers are available. Even if you are specialized, you will still have a wide range of skills (T-shaped knowledge).

Find the right type of company



Not all companies are equal. They have individual cultures and guidelines.

  • Startup: dynamic, with a larger impact. You wear many hats.
  • Medium-sized companies: the most stable ones. No cutting-edge technologies, but no crazy working hours either.
  • Large company: very established, with a lot of structure, but also constant layoffs and restructurings. You can have less impact. Complex politics.
  • Working for yourself: This is harder than you think, probably much harder.

Work at a tech company if you want to work on or with cutting-edge technologies.

Apply for the new job



Get a professional resume writer: get referrals for writers and samples from them. Become proficient with algorithm and data structure interview questions; see the "Cracking the Coding Interview" book and blog.

  • Apply for each job with a specialised CV, so that each CV fits the job better.
  • It's best to get a job via a personal referral or inbound marketing. The latter is somewhat rare.
  • Inbound marketing is, for example, when someone responds to your blog and offers you a job.
  • Interview the interviewer. Be persistent.
  • Create creative-looking resumes (see the Simple Programmer website) in an action-result style.

Invest in your dress code, as appearance matters. It does make sense to invest in your style. You could even hire a professional stylist (not my personal way, though).

Negotiation



  • Whoever names a number first loses. You don't know what the other side expects unless they tell you. A lowball number may be an issue, so you have to know the market.
  • Salary is not about what you need but what you are worth. Try to find out what you are worth.
  • Big tech companies have a pay scale. You can ask for this.
  • Don't tell them your current salary. Make only one counter offer and say: "If you do X, then I commit today." Be tactful, not rude; nobody wants to be taken advantage of. Also, don't be arrogant.
  • If the company wants to know your range, respond: "I would rather learn more about the job and the compensation first. You have a range in mind, correct?" Be brave and just pause here.
  • If the company still refuses, say: "If you tell me what the range is, then, although I am not yet sure what my exact salary requirements are, I can see whether the range matches what I am looking for." If they absolutely refuse, give a highball range you would expect and make it conditional on the overall compensation package, e.g. 70k to 100k depending on the package. THE LOW END SHOULD BE YOUR REAL LOW END. Play a little bit of hardball here and be brave. Practise it.
  • Put 10 percent on top of the offered salary in your counter offer.
  • Everything is negotiable, not only the salary.
  • Job market rate: check it as part of the rate negotiation.
  • Don't make a rushed decision based on deadlines. Make a fairly high counter offer shortly before the deadline.
  • You also have to cope with rejections while selling yourself. There is no such thing as job security.
  • "Never Split the Difference" is the best book for learning negotiation techniques.

Leaving the old job



When leaving a job, make it as clean and as non-personal as possible. Never complain and never explain. Don't worry about abandoning the team: everybody is replaceable, and you are making a business decision. For the same reason, don't threaten to quit; you are replaceable, too.

Other things



  • As a leader, lead by example; don't lead from the ivory tower.
  • As a leader, you are responsible for the team. If the team fails, it's your fault alone.

Testing



Unit testing vs. regression testing: Unit tests test the smallest possible unit and get rewritten when the unit changes. It's like programming against a specification. Regression tests check whether the software still works after a change. Now you know more than most software engineers.
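
The distinction can be sketched with a tiny, made-up example (the `parse_status` function and its mapping are hypothetical, chosen here only for illustration):

```python
# Hypothetical unit: map a Nagios-style exit code to a status label.
def parse_status(exit_code: int) -> str:
    return {0: "OK", 1: "WARNING", 2: "CRITICAL"}.get(exit_code, "UNKNOWN")


# Unit test: checks the smallest unit against its specification.
# If the specification of parse_status changes, this test gets rewritten.
def test_parse_status_unit():
    assert parse_status(0) == "OK"
    assert parse_status(2) == "CRITICAL"
    assert parse_status(99) == "UNKNOWN"


# Regression test: snapshots the current overall behaviour and checks
# that a later change did not alter it unintentionally.
def test_parse_status_regression():
    snapshot = [parse_status(code) for code in range(4)]
    assert snapshot == ["OK", "WARNING", "CRITICAL", "UNKNOWN"]


test_parse_status_unit()
test_parse_status_regression()
print("all tests passed")
```

The unit test encodes the intended specification of one small piece; the regression test only cares that the observable behaviour stays the same across changes.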

Books to read



  • Clean Code
  • Code Complete
  • Cracking the Coding Interview
  • Daniel Pink's "Drive" (about internal and external motivation)
  • God's Debris (by Scott Adams, the inventor of Dilbert)
  • Head First Design Patterns
  • How to Win Friends and Influence People
  • Never Split the Difference [X]
  • Structure and programming functional programs
  • The obstacle is the way [X]
  • The Passionate Programmer
  • The Power of Positive Thinking (Highly religious - I personally don't like it)
  • The Pragmatic Programmer [X]
  • The War of Art (to combat procrastination)
  • The Willpower Instinct

E-Mail your comments to paul@nospam.buetow.org :-)

Other book notes of mine are:

2025-06-07 "A Monk's Guide to Happiness" book notes
2025-04-19 "When: The Scientific Secrets of Perfect Timing" book notes
2024-10-24 "Staff Engineer" book notes
2024-07-07 "The Stoic Challenge" book notes
2024-05-01 "Slow Productivity" book notes
2023-11-11 "Mind Management" book notes
2023-07-17 "Software Developer's Career Guide and Soft Skills" book notes (You are currently reading this)
2023-05-06 "The Obstacle is the Way" book notes
2023-04-01 "Never split the difference" book notes
2023-03-16 "The Pragmatic Programmer" book notes

Back to the main site
KISS server monitoring with Gogios gemini://foo.zone/gemfeed/2023-06-01-kiss-server-monitoring-with-gogios.gmi 2023-06-01T21:10:17+03:00 Paul Buetow aka snonux paul@dev.buetow.org Gogios is a minimalistic and easy-to-use monitoring tool I programmed in Google Go designed specifically for small-scale self-hosted servers and virtual machines. The primary purpose of Gogios is to monitor my personal server infrastructure for `foo.zone`, my MTAs, my authoritative DNS servers, my NextCloud, Wallabag and Anki sync server installations, etc.

KISS server monitoring with Gogios



Published at 2023-06-01T21:10:17+03:00

Gogios is a minimalistic and easy-to-use monitoring tool I programmed in Google Go designed specifically for small-scale self-hosted servers and virtual machines. The primary purpose of Gogios is to monitor my personal server infrastructure for foo.zone, my MTAs, my authoritative DNS servers, my NextCloud, Wallabag and Anki sync server installations, etc.

Thanks to its compatibility with the Nagios Check API, Gogios offers a simple yet effective solution for monitoring a limited number of resources. In theory, though, Gogios scales to a couple of thousand checks. You can clone it from Codeberg here:

https://codeberg.org/snonux/gogios

Gogios logo

    _____________________________    ____________________________
   /                             \  /                            \
  |    _______________________    ||    ______________________    |
  |   /                       \   ||   /                      \   |
  |   | # Alerts with status c|   ||   | # Unhandled alerts:  |   |
  |   | hanged:               |   ||   |                      |   |
  |   |                       |   ||   | CRITICAL: Check Pizza|   |
  |   | OK->CRITICAL: Check Pi|   ||   | : Late delivery      |   |
  |   | zza: Late delivery    |   ||   |                      |   |
  |   |                       |   ||   | WARNING: Check Thirst|   |
  |   |                       |   ||   | : OutofKombuchaExcept|   |
  |   \_______________________/   ||   \______________________/   |
  |  /|\ GOGIOS MONITOR 1    _    ||  /|\ GOGIOS MONITOR 2   _    |
   \_____________________________/  \____________________________/
     !_________________________!      !________________________!

------------------------------------------------
ASCII art was modified by Paul Buetow
The original can be found at
https://asciiart.website/index.php?art=objects/computers

Motivation



Having gained experience with monitoring solutions like Nagios, Icinga, Prometheus and OpsGenie, I found that these tools often come with many features that I didn't need for personal use. Contact groups, host groups, check clustering, and the requirement to operate a DBMS and a WebUI added complexity and bloat to my monitoring setup.

My primary goal was to have a single email address for notifications and a simple mechanism to periodically execute standard Nagios check scripts and notify me of any state changes. I wanted the most minimalistic monitoring solution possible but wasn't satisfied with the available options.

This led me to create Gogios, a lightweight monitoring tool tailored to my specific needs. I chose the Go programming language for this project as it offers, in my opinion, the best balance of ease of use and performance.

Features



  • Compatible with Nagios check scripts: Gogios leverages the widely used Nagios Check API, allowing you to use existing Nagios plugins.
  • Lightweight and Minimalistic: Gogios is designed to be simple and fairly easy to set up.
  • Configurable Check Timeout and Concurrency: Gogios allows you to set a timeout for checks and configure the number of concurrent checks, offering flexibility in monitoring your resources.
  • Configurable check dependency: A check can depend on another check, which enables scenarios like not executing an HTTP check when the server isn't pingable.
  • Retries: Check retry and retry intervals are configurable per check.
  • Email Notifications: Gogios can send email notifications regarding the status of monitored services, ensuring you stay informed about potential issues.
  • CRON-based Execution: Gogios can be quickly scheduled to run periodically via CRON, allowing you to automate monitoring without needing a complex setup.

Example alert



This is an example alert report received via e-mail. The subject tag [C:2 W:0 U:0 OK:51] means that there are two alerts in status critical, zero warnings, zero unknowns and 51 OKs.

Subject: GOGIOS Report [C:2 W:0 U:0 OK:51]

This is the recent Gogios report!

# Alerts with status changed:

OK->CRITICAL: Check ICMP4 vulcan.buetow.org: Check command timed out
OK->CRITICAL: Check ICMP6 vulcan.buetow.org: Check command timed out

# Unhandled alerts:

CRITICAL: Check ICMP4 vulcan.buetow.org: Check command timed out
CRITICAL: Check ICMP6 vulcan.buetow.org: Check command timed out

Have a nice day!

Installation



Compiling and installing Gogios



This document is primarily written for OpenBSD, but applying the corresponding steps to any Unix-like (e.g. Linux-based) operating system should be easy. On systems other than OpenBSD, you may have to replace doas with the sudo command and replace the /usr/local/bin path with /usr/bin.

To compile and install Gogios on OpenBSD, follow these steps:

git clone https://codeberg.org/snonux/gogios.git
cd gogios
go build -o gogios cmd/gogios/main.go
doas cp gogios /usr/local/bin/gogios
doas chmod 755 /usr/local/bin/gogios

You can use cross-compilation if you want to compile Gogios for OpenBSD on a Linux system without installing the Go compiler on OpenBSD. Follow these steps:

export GOOS=openbsd
export GOARCH=amd64
go build -o gogios cmd/gogios/main.go

On your OpenBSD system, copy the binary to /usr/local/bin/gogios and set the correct permissions as described in the previous section. All the steps described here could be automated with your configuration management system of choice. I use Rexify, the friendly configuration management system, to automate the installation, but that is out of the scope of this document.

https://www.rexify.org

Setting up user, group and directories



It is best to create a dedicated system user and group for Gogios to ensure proper isolation and security. Here are the steps to create the _gogios user and group under OpenBSD:

doas adduser -group _gogios -batch _gogios
doas usermod -d /var/run/gogios _gogios
doas mkdir -p /var/run/gogios
doas chown _gogios:_gogios /var/run/gogios
doas chmod 750 /var/run/gogios

Please note that creating a user and group might differ depending on your operating system. For other operating systems, consult their documentation for creating system users and groups.

Installing monitoring plugins



Gogios relies on external Nagios or Icinga monitoring plugin scripts. On OpenBSD, you can install the monitoring-plugins package for use with Gogios. It is a collection of monitoring plugins, similar to the Nagios plugins, that can be used to monitor various services and resources:

doas pkg_add monitoring-plugins
doas pkg_add nrpe # If you want to execute checks remotely via NRPE.

Once the installation is complete, you can find the monitoring plugins in the /usr/local/libexec/nagios directory, which then can be configured to be used in gogios.json.

Configuration



MTA



Gogios requires a local Mail Transfer Agent (MTA) such as Postfix or OpenSMTPD running on the same server where the CRON job (see further below) is executed. The local MTA handles email delivery, allowing Gogios to send notifications about monitoring status changes. Before using Gogios, ensure that you have a properly configured MTA installed and running on your server to facilitate the sending of emails. Once the MTA is set up and functioning correctly, Gogios can leverage it to send email notifications.

You can use the mail command to send an email via the command line on OpenBSD. Here's an example of how to send a test email to ensure that your email server is working correctly:

echo 'This is a test email from OpenBSD.' | mail -s 'Test Email' your-email@example.com

Check the recipient's inbox to confirm the delivery of the test email. If the email is delivered successfully, it indicates that your email server is configured correctly and functioning. Please check your MTA logs in case of issues.

Configuring Gogios



To configure Gogios, create a JSON configuration file (e.g., /etc/gogios.json). Here's an example configuration:

{
  "EmailTo": "paul@dev.buetow.org",
  "EmailFrom": "gogios@buetow.org",
  "CheckTimeoutS": 10,
  "CheckConcurrency": 2,
  "StateDir": "/var/run/gogios",
  "Checks": {
    "Check ICMP4 www.foo.zone": {
      "Plugin": "/usr/local/libexec/nagios/check_ping",
      "Args": [ "-H", "www.foo.zone", "-4", "-w", "50,10%", "-c", "100,15%" ],
      "Retries": 3,
      "RetryInterval": 10
    },
    "Check ICMP6 www.foo.zone": {
      "Plugin": "/usr/local/libexec/nagios/check_ping",
      "Args": [ "-H", "www.foo.zone", "-6", "-w", "50,10%", "-c", "100,15%" ],
      "Retries": 3,
      "RetryInterval": 10
    },
    "www.foo.zone HTTP IPv4": {
      "Plugin": "/usr/local/libexec/nagios/check_http",
      "Args": ["www.foo.zone", "-4"],
      "DependsOn": ["Check ICMP4 www.foo.zone"]
    },
    "www.foo.zone HTTP IPv6": {
      "Plugin": "/usr/local/libexec/nagios/check_http",
      "Args": ["www.foo.zone", "-6"],
      "DependsOn": ["Check ICMP6 www.foo.zone"]
    },
    "Check NRPE Disk Usage foo.zone": {
      "Plugin": "/usr/local/libexec/nagios/check_nrpe",
      "Args": ["-H", "foo.zone", "-c", "check_disk", "-p", "5666", "-4"]
    }
  }
}

  • EmailTo: Specifies the recipient of the email notifications.
  • EmailFrom: Indicates the sender's email address for email notifications.
  • CheckTimeoutS: Sets the timeout for checks in seconds.
  • CheckConcurrency: Determines the number of concurrent checks that can run simultaneously.
  • StateDir: Specifies the directory where Gogios stores its persistent state in a state.json file.
  • Checks: Defines a list of checks to be performed, each with a unique name, plugin path, and arguments.

Adjust the configuration file according to your needs, specifying the checks you want Gogios to perform.

If you want to execute checks only when another check succeeded (status OK), use DependsOn. In the example above, the HTTP checks won't run when the hosts aren't pingable. They will show up as UNKNOWN in the report.

Retries and RetryInterval are optional check configuration parameters. In case of failure, Gogios will retry up to Retries times, every RetryInterval seconds.

For remote checks, use the check_nrpe plugin. You also need to have the NRPE server set up correctly on the target host (out of scope for this document).

The state.json file mentioned above keeps track of the monitoring state and check results between Gogios runs, enabling Gogios to send email notifications only when a check status changes.

Running Gogios



Now it is time to give it a first run. On OpenBSD, do:

doas -u _gogios /usr/local/bin/gogios -cfg /etc/gogios.json

To run Gogios periodically via CRON on OpenBSD as the _gogios user, follow these steps:

Type doas crontab -e -u _gogios and press Enter to open the crontab file for the _gogios user for editing and add the following lines to the crontab file:

*/5 8-22 * * * /usr/local/bin/gogios -cfg /etc/gogios.json
0 7 * * * /usr/local/bin/gogios -renotify -cfg /etc/gogios.json

Gogios is now configured to run every five minutes from 8 am to 10 pm via CRON as the _gogios user. It will execute the checks and send monitoring status whenever a check status changes via email according to your configuration. Also, Gogios will run once at 7 am every morning and re-notify all unhandled alerts as a reminder.

High-availability



To create a high-availability Gogios setup, you can install Gogios on two servers that will monitor each other using the NRPE (Nagios Remote Plugin Executor) plugin. By running Gogios in alternate CRON intervals on both servers, you can ensure that even if one server goes down, the other will continue monitoring your infrastructure and sending notifications.

  • Install Gogios on both servers following the compilation and installation instructions provided earlier.
  • Install the NRPE server (out of scope for this document) and plugin on both servers. This plugin allows you to execute Nagios check scripts on remote hosts.
  • Configure Gogios on both servers to monitor each other using the NRPE plugin. Add a check to the Gogios configuration file (/etc/gogios.json) on both servers that uses the NRPE plugin to execute a check script on the other server. For example, if you have Server A and Server B, the configuration on Server A should include a check for Server B, and vice versa.
  • Set up alternate CRON intervals on both servers. Configure the CRON job on Server A to run Gogios at minutes 0, 10, 20, ..., and on Server B to run at minutes 5, 15, 25, ... This will ensure that if one server goes down, the other server will continue monitoring and sending notifications.
  • Gogios doesn't support clustering. This means that when both servers are up, unhandled alerts will be notified via e-mail twice, once from each server. That's the trade-off for simplicity.
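
As a sketch, the mutual check and the alternating schedule described above could look like this on Server A (the hostname, check name, NRPE command and intervals are placeholders, not from an actual setup). The check entry goes into the Checks map of /etc/gogios.json:

```json
{
  "Check NRPE Server B alive": {
    "Plugin": "/usr/local/libexec/nagios/check_nrpe",
    "Args": ["-H", "server-b.example.org", "-c", "check_load", "-p", "5666"]
  }
}
```

And the alternating CRON schedules, offset by five minutes so the two servers never check in lockstep:

```
# Server A: minutes 0, 10, 20, ...
*/10 * * * * /usr/local/bin/gogios -cfg /etc/gogios.json

# Server B: minutes 5, 15, 25, ...
5-55/10 * * * * /usr/local/bin/gogios -cfg /etc/gogios.json
```

Server B gets the mirror-image check pointing back at Server A, so each machine alerts when the other stops responding.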

There are plans to make it possible to execute certain checks only on certain nodes (e.g. on elected leader or master nodes). This is still in progress (check out my Gorum Git project).

Conclusion



Gogios is a lightweight and straightforward monitoring tool that is perfect for small-scale environments. With its compatibility with the Nagios Check API, email notifications, and CRON-based scheduling, Gogios offers an easy-to-use solution for those looking to monitor a limited number of resources. I personally use it to execute around 500 checks on my personal server infrastructure. I am very happy with this solution.

E-Mail your comments to paul@nospam.buetow.org :-)

Other KISS-related posts are:

2024-04-01 KISS high-availability with OpenBSD
2023-10-29 KISS static web photo albums with photoalbum.sh
2023-06-01 KISS server monitoring with Gogios (You are currently reading this)
2021-09-12 Keep it simple and stupid

Back to the main site
'The Obstacle is the Way' book notes gemini://foo.zone/gemfeed/2023-05-06-the-obstacle-is-the-way-book-notes.gmi 2023-05-06T17:23:16+03:00 Paul Buetow aka snonux paul@dev.buetow.org These are my personal takeaways after reading 'The Obstacle Is the Way' by Ryan Holiday. This is mainly for my own use, but you might find it helpful too.

"The Obstacle is the Way" book notes



Published at 2023-05-06T17:23:16+03:00

These are my personal takeaways after reading "The Obstacle Is the Way" by Ryan Holiday. This is mainly for my own use, but you might find it helpful too.

         ,..........   ..........,
     ,..,'          '.'          ',..,
    ,' ,'            :            ', ',
   ,' ,'             :             ', ',
  ,' ,'              :              ', ',
 ,' ,'............., : ,.............', ',
,'  '............   '.'   ............'  ',
 '''''''''''''''''';''';''''''''''''''''''
                    '''

"The obstacle is the way" is a powerful statement that encapsulates the wisdom of turning challenges into opportunities for growth and success. We will explore using obstacles as fuel, transforming weaknesses into strengths, and adopting a mindset that allows us to be creative and persistent in the face of adversity.

Reframe your perspective



The obstacle in your path can become your path to success. Instead of being paralyzed by challenges, see them as opportunities to learn and grow. Remember, the things that hurt us often instruct us.

We spend a lot of time trying to get things perfect and obsessing over the rules, but what matters is that it works; it doesn't need to be by the book. Focus on results rather than on beautiful methods. In Jujitsu, it matters that you bring your opponent down, not how. There are many ways from point A to point B; it doesn't need to be a straight line. So many try to find the best solution and fail to see what is right in front of them. Think progress, not perfection.

Don't always try to use the front door; a back door could be open. Fighting the judo master with judo is nonsense. Non-action can be action, exposing the weaknesses of others.

Embrace rationality



It is a superpower to see things rationally when others are fearful. Focus on the reality of the situation without letting emotions, such as anger, cloud your judgment. This ability will enable you to make better decisions in adversity. It is the ability to see things as they really are: e.g., wine is just old, fermented grapes, and people in a fight behave like animals. Occasionally, show the middle finger to those who insist on the stupid rules.

Control your response



You can choose how you respond to obstacles. Focus on what you can control, and don't let yourself feel harmed by external circumstances. Remember, you decide how things affect you; nobody else does. Choose to feel good in response to any situation. Embrace the challenges and obstacles that come your way, as they are opportunities for growth and learning.

Practice emotional and physical resilience



Martial artists know the importance of developing physical and emotional strength. Cultivate the art of not panicking; it will help you avoid making mistakes during high-pressure situations.

Focus on what you can control. Don't choose to feel harmed, and then you won't be harmed. I decide which things affect me; nobody else does. E.g., even in prison, your mind stays your own. Don't ignore fear, but explain it away by taking a different view.

Persistence and patience



Practice persistence and patience in your pursuits. Focus on the process rather than the prize, and take one step at a time. The process is about finishing (a workout, a task, a project) to the best of your ability. Never be in a hurry and never be desperate; there is no reason to rush, as we are all in it for the long haul.

Embrace failure



Failure is a natural part of life and can make us stronger. Treat defeat as a stepping stone to success; it is the first step to education. If we do our best, we can be proud of it, regardless of the result. Do your job, but do it right. Only an asshole thinks he is too good for the things he does. Also, asking for forgiveness is easier than asking for permission.

Be adaptable



There are many ways to achieve your goals; sometimes, unconventional methods are necessary. Feel free to break the rules or go off the beaten path if it leads to better results. Transform weaknesses into strengths. We have a choice in how to respond to things. It's not about being positive but about being creative. Aim high, but expect that things will happen; surprises always do.

Embrace non-action



We constantly push on to the next thing, but sometimes the best course of action is to stand still, go sideways, or even go backwards. Obstacles may resolve themselves or present new opportunities if you're patient and observant. People always want your input before you have all the facts; they want you to play by their rules. The question is, do you let them? The English call it keeping a cool head; the Greeks, the absence of fear. Being in control under stress requires practice. When all others do it one way, that does not mean it is the correct or best practice.

Leverage crisis



In times of crisis, seize the chance to do things never done before. Great people use negative situations to their advantage and become the most effective in challenging circumstances.

Cultivate the art of not panicking; otherwise, you will make mistakes. When others are shocked, you already know which way to go because you have thought the problem at hand through. A crisis gives you a chance to do things which were never done before. Ordinary people shy away from negative situations; great people use them to their benefit and are at their most effective. The obstacle is not just turned upside down but used as a catapult.

Be prepared for nothing to work. Problems are an opportunity to do your best, not to perform miracles. Always manage your expectations: it will suck, but it will be ok. Be prepared to begin from the beginning, and cheerfully and eagerly work on the next obstacle. Each time, you become better. Life is not a sprint but a marathon. After each obstacle lies another obstacle; there won't be a life without obstacles. Passing one means you are ready for the next.

Build your inner citadel



Develop your inner strength during good times so you can rely on it in bad times. Always prepare for adversity and face it with calmness and resilience. Be humble enough to accept that things which happen will happen. Build your inner citadel: in good times, strengthen it; in bad times, rely on it.

We should always prepare for things to get tough. Your house burns down? No worries, we eliminated a lot of rubbish. Imagine what can go wrong before things go wrong. We are prepared for adversity; it's other people who aren't (see Phil Jackson's hip problem as an example). To receive unexpected benefits, you must first accept the unexpected obstacles. Meditate on death; it's a universal obstacle. Use it as a reminder to do your best.

Love everything that happens



Turn an obstacle the other way around for your benefit. Use it as fuel. It's simple but challenging; most are paralyzed instead. The obstacle in the path becomes the path. Obstacles are neither good nor bad. The things which hurt, instruct.

Should I hate people who hate me? That's their problem, not mine. Always stay calm and relaxed during a fight; the story of the battle is the story of the smile. Cheerfulness in all situations, especially the bad ones. Love everything that happens; if it happens, it was meant to happen. We can choose how we react to things, so why not choose to feel good? You must never lower yourself to the level of the person you don't like.

Conclusion



Life is a marathon, not a sprint. Each obstacle we overcome prepares us for the next one. Remember, the obstacle is not just a barrier to be turned upside down; it can also be used as a catapult to propel us forward. By embracing challenges and using them as opportunities for growth, we become stronger, more adaptable, and, ultimately, more successful.

E-Mail your comments to paul@nospam.buetow.org :-)

Other book notes of mine are:

2025-06-07 "A Monk's Guide to Happiness" book notes
2025-04-19 "When: The Scientific Secrets of Perfect Timing" book notes
2024-10-24 "Staff Engineer" book notes
2024-07-07 "The Stoic Challenge" book notes
2024-05-01 "Slow Productivity" book notes
2023-11-11 "Mind Management" book notes
2023-07-17 "Software Developer's Career Guide and Soft Skills" book notes
2023-05-06 "The Obstacle is the Way" book notes (You are currently reading this)
2023-04-01 "Never split the difference" book notes
2023-03-16 "The Pragmatic Programmer" book notes

Back to the main site
Unveiling `guprecords.raku`: Global Uptime Records with Raku gemini://foo.zone/gemfeed/2023-05-01-unveiling-guprecords:-uptime-records-with-raku.gmi 2023-04-30T13:10:26+03:00 Paul Buetow aka snonux paul@dev.buetow.org For fun, I am tracking the uptime of various personal machines (servers, laptops, workstations...). I have been doing this for over ten years now, so I have a lot of statistics collected.

Unveiling guprecords.raku: Global Uptime Records with Raku



Published at 2023-04-30T13:10:26+03:00

+-----+-----------------+-----------------------------+
| Pos |            Host |                    Lifespan |
+-----+-----------------+-----------------------------+
|  1. |        dionysus |  8 years, 6 months, 17 days |
|  2. |          uranus |  7 years, 2 months, 16 days |
|  3. |   alphacentauri |  6 years, 9 months, 13 days |
|  4. |         *vulcan |   4 years, 5 months, 6 days |
|  5. |             sun |  3 years, 10 months, 2 days |
|  6. |           uugrn |   3 years, 5 months, 5 days |
|  7. |       deltavega |  3 years, 1 months, 21 days |
|  8. |           pluto | 2 years, 10 months, 30 days |
|  9. |         tauceti |  2 years, 3 months, 22 days |
| 10. |        callisto |  2 years, 3 months, 13 days |
+-----+-----------------+-----------------------------+

Table of Contents




Introduction



For fun, I am tracking the uptime of various personal machines (servers, laptops, workstations...). I have been doing this for over ten years now, so I have a lot of statistics collected.

As a result of this, I am introducing guprecords.raku, a handy Raku script that helps me combine uptime statistics from multiple servers into one comprehensive report. In this blog post, I'll explore what Guprecords is and some examples of its application. I will also add some notes on Raku.

Guprecords, or global uptime records, is a Raku script designed to generate a consolidated uptime report from multiple hosts:

https://codeberg.org/snonux/guprecords
The Raku Programming Language

A previous version of Guprecords was actually written in Perl, the older and more established language from which Raku was developed. One of the primary motivations for rewriting Guprecords in Raku was to learn the language and explore its features. Raku is a more modern and powerful language compared to Perl, and working on a real-world project like Guprecords provided a practical and engaging way to learn the language.

Over the last few years, I have been reading the following books and resources about Raku:

  • Raku Guide (at raku.guide)
  • Think Perl 6
  • Raku Fundamentals
  • Raku Recipes

I have also been following the Raku newsletter, sometimes lurking in the IRC channels, and watching Raku coding challenges on YouTube was pretty fun, too. However, nothing beats actually using Raku to learn the language. After reading all of these resources, I may have a good idea of the features and paradigms, but I am by far no expert.

How Guprecords works



Guprecords works in three stages:

  • 1. Generating uptime statistics using uptimed: First, I need to install and run uptimed on each host to generate uptime statistics. This tool is available for most common Linux and *BSD distributions and macOS via Homebrew.
  • 2. Collecting uptime records to a central location: The next step involves collecting the raw uptime statistics files generated by uptimed on each host. It's a good idea to store all record files in a central git repository. The records file contains information about the total uptime since boot, boot time, and the operating system and kernel version. Guprecords itself does not do the collection part, but have a look at the README.md in the git repository for some guidance.
  • 3. Generating global uptime stats: Finally, run the guprecords.raku script with the appropriate flags to create a global uptime report. For example, I can use the following command:

$ raku guprecords.raku --stats=dir=$HOME/git/uprecords/stats --all

This command will generate a comprehensive uptime report from the collected statistics, making it easy to review and enjoy the data.
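As an aside, the per-host record files make this kind of aggregation easy to sketch in a few lines of shell. The following is a hypothetical illustration, not guprecords itself, and it assumes (purely for the example) colon-separated record lines of the form uptime_seconds:boot_epoch:system:

```shell
#!/usr/bin/env bash
# Hypothetical sketch, NOT guprecords itself: aggregate uptime across several
# per-host record files. The colon-separated line format used here
# (uptime_seconds:boot_epoch:system) is an assumption for illustration only.
set -eu

stats_dir=$(mktemp -d)

# Two fake record files standing in for stats collected from two hosts.
printf '86400:1600000000:Linux 5.10\n172800:1610000000:Linux 5.15\n' > "$stats_dir/hostA"
printf '259200:1620000000:OpenBSD 7.3\n' > "$stats_dir/hostB"

# Sum the first field of every line across all files; report whole days.
total_days=$(awk -F: '{ s += $1 } END { print int(s / 86400) }' "$stats_dir"/*)
echo "total uptime: $total_days days"   # prints: total uptime: 6 days
```

The real script does far more (grouping by category, multiple metrics, several output formats), but the core idea of folding many record files into one report is the same.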

Guprecords supports the following features:

  • Supports multiple categories: Host, Kernel, KernelMajor, and KernelName
  • Supports multiple metrics: Boots, Uptime, Score, Downtime, and Lifespan
  • Output formats available: Plaintext, Markdown, and Gemtext
  • Provides top entries based on the specified limit

Example



You have already seen an example at the very top of this post, where the hosts were grouped by their total lifespans (uptime+downtime). Here's an example of what the global uptime report (grouped by total host uptimes) might look like:

Top 20 Uptime's by Host

+-----+-----------------+-----------------------------+
| Pos |            Host |                      Uptime |
+-----+-----------------+-----------------------------+
|  1. |         *vulcan |   4 years, 5 months, 6 days |
|  2. |          uranus | 3 years, 11 months, 21 days |
|  3. |             sun |  3 years, 9 months, 26 days |
|  4. |           uugrn |   3 years, 5 months, 5 days |
|  5. |       deltavega |  3 years, 1 months, 21 days |
|  6. |           pluto | 2 years, 10 months, 29 days |
|  7. |         tauceti |  2 years, 3 months, 19 days |
|  8. |       tauceti-f |  1 years, 9 months, 18 days |
|  9. | *ultramega15289 |  1 years, 8 months, 17 days |
| 10. |          *earth |  1 years, 5 months, 22 days |
| 11. |       *blowfish |  1 years, 4 months, 20 days |
| 12. |   ultramega8477 |  1 years, 3 months, 25 days |
| 13. |           host0 |   1 years, 3 months, 9 days |
| 14. |       tauceti-e |  1 years, 2 months, 20 days |
| 15. |        makemake |   1 years, 1 months, 6 days |
| 16. |        callisto | 0 years, 10 months, 31 days |
| 17. |   alphacentauri | 0 years, 10 months, 28 days |
| 18. |          london |  0 years, 9 months, 16 days |
| 19. |         twofish |  0 years, 8 months, 31 days |
| 20. |     *fishfinger |  0 years, 8 months, 17 days |
+-----+-----------------+-----------------------------+

This table ranks the top 20 hosts based on their total uptime, with the host having the highest uptime at the top. The hosts marked with * are still active, meaning stats were collected within the last couple of months.

My up-to-date stats can be seen here:

My machine uptime stats

Just recently, I decommissioned vulcan (the number one spot from above), which used to be the CentOS 7 (initially CentOS 6) VM hosting my personal Nextcloud and Wallabag (which I modernised just recently with a brand new shiny Rocky Linux 9 VM). This was the last uptimed output before shutting it down (it always makes me feel sentimental to decommission one of my machines :'-():

     #               Uptime | System                                     Boot up
----------------------------+---------------------------------------------------
     1   545 days, 17:58:15 | Linux 3.10.0-1160.15.2.e  Sun Jul 25 19:32:25 2021
     2   279 days, 10:12:14 | Linux 3.10.0-957.21.3.el  Sun Jun 30 12:43:41 2019
     3   161 days, 06:08:43 | Linux 3.10.0-1160.15.2.e  Sun Feb 14 11:05:38 2021
     4   107 days, 01:26:35 | Linux 3.10.0-957.1.3.el7  Thu Dec 20 09:29:13 2018
     5    96 days, 21:13:49 | Linux 3.10.0-1127.13.1.e  Sat Jul 25 17:56:22 2020
->   6    89 days, 23:05:32 | Linux 3.10.0-1160.81.1.e  Sun Jan 22 12:39:36 2023
     7    63 days, 18:30:45 | Linux 3.10.0-957.10.1.el  Sat Apr 27 18:12:43 2019
     8    63 days, 06:53:33 | Linux 3.10.0-1127.8.2.el  Sat May 23 10:41:08 2020
     9    48 days, 11:44:49 | Linux 3.10.0-1062.18.1.e  Sat Apr  4 22:56:07 2020
    10    42 days, 08:00:13 | Linux 3.10.0-1127.19.1.e  Sat Nov  7 11:47:33 2020
    11    36 days, 22:57:19 | Linux 3.10.0-1160.6.1.el  Sat Dec 19 19:47:57 2020
    12    21 days, 06:16:28 | Linux 3.10.0-957.10.1.el  Sat Apr  6 11:56:01 2019
    13    12 days, 20:11:53 | Linux 3.10.0-1160.11.1.e  Mon Jan 25 18:45:27 2021
    14     7 days, 21:29:18 | Linux 3.10.0-1127.13.1.e  Fri Oct 30 14:18:04 2020
    15     6 days, 20:07:18 | Linux 3.10.0-1160.15.2.e  Sun Feb  7 14:57:35 2021
    16     1 day , 21:46:41 | Linux 3.10.0-957.1.3.el7  Tue Dec 18 11:42:19 2018
    17     0 days, 01:25:57 | Linux 3.10.0-957.1.3.el7  Tue Dec 18 10:16:08 2018
    18     0 days, 00:42:34 | Linux 3.10.0-1160.15.2.e  Sun Jul 25 18:49:38 2021
    19     0 days, 00:08:32 | Linux 3.10.0-1160.81.1.e  Sun Jan 22 12:30:52 2023
----------------------------+---------------------------------------------------
1up in     6 days, 22:08:18 | at                        Sat Apr 29 10:53:25 2023
no1 in   455 days, 18:52:44 | at                        Sun Jul 21 07:37:51 2024
    up  1586 days, 00:20:28 | since                     Tue Dec 18 10:16:08 2018
  down     0 days, 01:08:32 | since                     Tue Dec 18 10:16:08 2018
   %up               99.997 | since                     Tue Dec 18 10:16:08 2018

Conclusion



Guprecords is a small, yet powerful tool for analyzing uptime statistics. While developing Guprecords, I have come to truly appreciate and love Raku's expressiveness. The language is designed to be both powerful and flexible, allowing developers to express their intentions and logic more clearly and concisely.

Raku's expressive syntax, support for multiple programming paradigms, and unique features, such as grammars and lazy evaluation, make it a joy to work with.

Working on Guprecords in Raku has been an enjoyable experience, and I've found that Raku's expressiveness has significantly contributed to the overall quality and effectiveness of the script. The language's ability to elegantly express complex logic and data manipulation tasks makes it an excellent choice for developing tools like these, where expressiveness and productiveness are of the utmost importance.

So far, I have only scratched the surface of what Raku can do. I hope to find more time to become a regular Rakoon (a Raku programmer). I have many ideas for other small tools like Guprecords, but the challenge is finding the time. I'd love to explore Raku grammars and writing concurrent code in Raku (I also love Go (Golang), btw!). Ideas for future personal Raku projects include:

  • A log file analyzer for generating anonymized foo.zone visitor stats for both the web and Gemini.
  • A social media sharing scheduler à la buffer.com. I am using Buffer at the moment to share posts on Mastodon, Twitter, Telegram and LinkedIn, but it is proprietary and not really reliable.
  • Rewrite the static photo album generator of irregular.ninja in Raku (from Bash).

E-Mail your comments to hi@foo.zone :-)

Other related posts are:

2023-05-01 Unveiling guprecords.raku: Global Uptime Records with Raku (You are currently reading this)
2022-06-15 Sweating the small stuff - Tiny projects of mine
2022-05-27 Perl is still a great choice
2011-05-07 Perl Daemon (Service Framework)
2008-06-26 Perl Poetry

Back to the main site
'Never split the difference' book notes gemini://foo.zone/gemfeed/2023-04-01-never-split-the-difference-book-notes.gmi 2023-04-01T20:00:17+03:00 Paul Buetow aka snonux paul@dev.buetow.org These are my personal takeaways after reading 'Never split the difference' by Chris Voss. Note that the book contains much more wisdom and that these notes only contain points I personally found worth writing down. This is mainly for my own use, but you might find it helpful too.

"Never split the difference" book notes



Published at 2023-04-01T20:00:17+03:00

These are my personal takeaways after reading "Never split the difference" by Chris Voss. Note that the book contains much more wisdom and that these notes only contain points I personally found worth writing down. This is mainly for my own use, but you might find it helpful too.

         ,..........   ..........,
     ,..,'          '.'          ',..,
    ,' ,'            :            ', ',
   ,' ,'             :             ', ',
  ,' ,'              :              ', ',
 ,' ,'............., : ,.............', ',
,'  '............   '.'   ............'  ',
 '''''''''''''''''';''';''''''''''''''''''
                    '''

Table of Contents




Tactical listening, spreading empathy



Be a mirror: copying each other makes both sides comfortable with each other and builds trust. Mirroring is mainly body language, but a verbal mirror is to repeat the words the other person just said. Simple but effective.

  • A mirror needs space and silence between the words. At least 4 seconds.
  • A mirror might be awkward to use at first, especially with a question coupled to it.
  • We fear what's different and are drawn to what is similar.

Mirror training is like Jedi training. Remember to be silent after "you want this?"

Mindset of discovery



Try to have multiple realities in your mind and use facts to distinguish between real and false.

  • Focus on what the counterpart has to say and what he needs and wants. Understanding him makes him vulnerable.
  • Empathy is understanding the other person from his perspective, but it does not mean agreeing with him.
  • Detect and label the emotions of others; it gives you power.
  • Feeling understood seems to solve all problems magically.

Try putting a label on someone's emotion and then being silent. Wait for the other to reveal himself: "You seem unhappy about this?"

More tips



  • Put on a poker face and don't show emotions.
  • Slow things down. Don't be a problem solver.
  • Smile while you are talking, even on the phone. Be easy and encouraging.
  • Being right is not the key to successful negotiation; being mindful is.
  • Be in the safe zone of empathy and acknowledge bad news.

"No" starts the conversation



When the opponent starts with a "no", he feels in control and comfortable. That's why he has to start with "no".

  • "Yes" and "maybe" might be worthless, but "no" starts the conversation.
  • If someone is saying "no" to you, he will be open to what you have to say next.
  • "No" is not stopping the negotiation but will open up opportunities you were not thinking about before.
  • Start with "no". Great negotiators seek "no" because that's when the great discussions begin.
  • A "no" can be scary if you are not used to it. If your biggest fear is "no", then you can't negotiate.

Get a "That's right" when negotiating. Don't get a "you're right". You can summarise the opponent to get a "that's right".

Win-win



Win-win is a naive approach when you encounter a win-lose counterpart, but stay cooperative anyway. Don't compromise, and don't split the difference. We don't compromise because it's right; we do it because it is easy. You must embrace the hard stuff; that's where the great deals are.

On Deadlines



  • All deadlines are imaginary.
  • Most of the time, deadlines unsettle us without a good reason.
  • They push a deal to a conclusion.
  • They rush the counterpart to cause pressure and anxiety.

Analyse the opponent



  • Understand the motivation of people behind the table as well.
  • Ask how affected they will be.
  • Determine your and the opposite negotiation style. Accommodation, analyst, assertive.
  • Treat them how they need to be treated.

The person on the other side is never the issue; the problem is the issue. Keep this in mind to avoid emotional issues with the person and focus on the problem, not the person. The bond is essential; never create an enemy.

Use different ways of saying "no."



I have always paid my rent on time. I have had positive experiences with the building, and it would be sad for the landlord to lose a good tenant. I am looking for a win-win agreement between us. Pulling out the research: other buildings in the neighbourhood offer much lower prices, even if your building has a better location and services. How can I afford 200 more....

...then put an extreme anchor.

You always have to embrace thoughtful confrontation for good negotiation and life. Don't avoid honest, clear conflict. It will give you the best deals. Compromises are mostly bad deals for both sides. Most people don't negotiate a win-win but a win-lose. Know the best and worst outcomes and what is acceptable for you.

Calibrated question



Calibrated questions give the opponent a sense of power. Ask open "how" questions to get the opponent to solve your problem and move him in your direction. Calibrated questions are the best tools. Summarise everything, and then ask: "How am I supposed to do that?". Asking for help this way with a calibrated question is a powerful tool for joint problem solving.

Being calm and respectful is essential; without control of your emotions, it won't work. The counterpart will have no idea how constrained they are by your question. Avoid questions which get a "yes" or short answers. Use "why?".

Counterparts are more involved if the solutions are their own. The counterpart must answer with "that's right", not "you are right". He has to own the problem. If not, add more "why" questions.

  • Tone and body language need to align with what people are saying.
  • Deal with a mismatch via a labelled question.
  • Liars tend to talk about "them" and "their" and not about "I".
  • Also, liars tend to talk in complex sentences.

Prepare 3 to 5 calibrated questions for your counterpart. Be curious about what is really motivating the other side. You can draw out the "Black Swan".

The black swan



What we don't know can break our deal. Uncovering it can bring us unexpected success. You get what you ask for in this world, but you must learn to ask correctly. Reveal the black swan by asking questions.

More



Establish a range: at top places like corp. I get... (e.g. remote London on a project basis). Set a high salary range and not a single number. Also, check LinkedIn Premium for the salaries.

  • Give an unexpected gift, e.g. show them my pet project and publicity for engineering.
  • Use an odd number, which makes you seem to have thought a lot about the sum and calculated it.
  • Define success and metrics for your next raise.
  • What does it take to be successful here? Ask the question, and they will tell you and guide you.
  • Set an extreme anchor. Give the counterpart the illusion of losing something.
  • Avoid hope-based deals. Hope is not a strategy.
  • Tactical empathy is listening as a martial art; it is emotional intelligence on steroids.
  • Being right isn't the key to a successful negotiation, but having the correct mindset is.
  • Don't shop for groceries when you are hungry.

Slow.... it.... down....

E-Mail your comments to paul@nospam.buetow.org :-)

Other book notes of mine are:

2025-06-07 "A Monk's Guide to Happiness" book notes
2025-04-19 "When: The Scientific Secrets of Perfect Timing" book notes
2024-10-24 "Staff Engineer" book notes
2024-07-07 "The Stoic Challenge" book notes
2024-05-01 "Slow Productivity" book notes
2023-11-11 "Mind Management" book notes
2023-07-17 "Software Developer's Career Guide and Soft Skills" book notes
2023-05-06 "The Obstacle is the Way" book notes
2023-04-01 "Never split the difference" book notes (You are currently reading this)
2023-03-16 "The Pragmatic Programmer" book notes

Back to the main site
Gemtexter 2.0.0 - Let's Gemtext again² gemini://foo.zone/gemfeed/2023-03-25-gemtexter-2.0.0-lets-gemtext-again-2.gmi 2023-03-25T17:50:32+02:00 Paul Buetow aka snonux paul@dev.buetow.org I proudly announce that I've released Gemtexter version `2.0.0`. What is Gemtexter? It's my minimalist static site generator for Gemini Gemtext, HTML and Markdown written in GNU Bash.

Gemtexter 2.0.0 - Let's Gemtext again²



Published at 2023-03-25T17:50:32+02:00

I proudly announce that I've released Gemtexter version 2.0.0. What is Gemtexter? It's my minimalist static site generator for Gemini Gemtext, HTML and Markdown written in GNU Bash.

https://codeberg.org/snonux/gemtexter

This is a new major release, so it contains a breaking change (see "Meta cache made obsolete").

Let's list what's new!

-=[ typewriters ]=-  1/98

       .-------.
      _|~~ ~~  |_       .-------.
    =(_|_______|_)=    _|~~ ~~  |_
      |:::::::::|    =(_|_______|_)
      |:::::::[]|      |:::::::::|
      |o=======.|      |:::::::[]|
 jgs  `"""""""""`      |o=======.|
  mod. by Paul Buetow  `"""""""""`

Table of Contents




Minimal template engine



Gemtexter now supports templating, enabling dynamically generated content in .gmi files before anything is converted to an output format like HTML or Markdown.

A template file name must have the suffix gmi.tpl, and the template must be put into the same directory as the Gemtext .gmi file to be generated. Gemtexter will generate a Gemtext file index.gmi from a given template index.gmi.tpl. Multiline templates are enclosed by <<< and >>>. All lines starting with << will be evaluated as a single line of Bash code, and the output will be written into the resulting Gemtext file.

For example, the template index.gmi.tpl:

# Hello world

<< echo "> This site was generated at $(date --iso-8601=seconds) by \`Gemtexter\`"

Welcome to this capsule!

<<<
  for i in {1..10}; do
    echo Multiline template line $i
  done
>>>

... results in the following index.gmi after running ./gemtexter --generate (or ./gemtexter --template, which instructs Gemtexter to do only the template processing and nothing else):

# Hello world

> This site was generated at 2023-03-15T19:07:59+02:00 by `Gemtexter`

Welcome to this capsule!

Multiline template line 1
Multiline template line 2
Multiline template line 3
Multiline template line 4
Multiline template line 5
Multiline template line 6
Multiline template line 7
Multiline template line 8
Multiline template line 9
Multiline template line 10

Another thing you can do is insert an index with links to similar blog posts. E.g.:

See more entries about DTail and Golang:

<< template::inline::rindex dtail golang

Blablabla...

... scans all other post entries with dtail and golang in the file name and generates a link list like this:

See more entries about DTail and Golang:

=> ./2022-10-30-installing-dtail-on-openbsd.gmi 2022-10-30 Installing DTail on OpenBSD
=> ./2022-04-22-programming-golang.gmi 2022-04-22 The Golang Programming language
=> ./2022-03-06-the-release-of-dtail-4.0.0.gmi 2022-03-06 The release of DTail 4.0.0
=> ./2021-04-22-dtail-the-distributed-log-tail-program.gmi 2021-04-22 DTail - The distributed log tail program (You are currently reading this)

Blablabla...
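The single-line << rule could be approximated in plain Bash roughly like this (a minimal sketch under my own assumptions, not Gemtexter's actual implementation):

```shell
#!/usr/bin/env bash
# Minimal sketch of the "<<" template rule: every line starting with "<< "
# is evaluated as Bash and replaced by its output; all other lines pass
# through unchanged. This is an illustration, not Gemtexter's real code.
set -eu -o pipefail

render () {
    local line
    while IFS= read -r line; do
        if [[ $line == "<< "* ]]; then
            eval "${line#"<< "}"
        else
            printf '%s\n' "$line"
        fi
    done
}

result=$(render <<< $'# Hello world\n<< echo "Rendered by Bash"')
echo "$result"   # prints the title line followed by "Rendered by Bash"
```

A real implementation also has to handle the <<< ... >>> multiline blocks and run before any HTML/Markdown conversion, but the core idea is just "evaluate marked lines, pass the rest through".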

Added hooks



You can configure PRE_GENERATE_HOOK and POST_PUBLISH_HOOK to point to scripts to be executed before running --generate or after running --publish. E.g., you could populate some of the content with an external script before letting Gemtexter do its thing, or you could automatically deploy the site after running --publish.

The sample config file gemtexter.conf includes this as an example now; these scripts will only be executed when they actually exist:

declare -xr PRE_GENERATE_HOOK=./pre_generate_hook.sh
declare -xr POST_PUBLISH_HOOK=./post_publish_hook.sh
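The "only executed when they actually exist" behaviour could be sketched like this (my own illustration, not Gemtexter's source):

```shell
#!/usr/bin/env bash
# Sketch: run a configured hook script only if it exists and is executable;
# otherwise skip it silently instead of failing the whole run.
set -eu

run_hook () {
    local hook=$1
    if [[ -x $hook ]]; then
        "$hook"
    fi
}

workdir=$(mktemp -d)

# No hook present yet: nothing happens, the run continues.
run_hook "$workdir/pre_generate_hook.sh"

# After creating an executable hook, it is picked up and executed.
printf '#!/bin/sh\necho "pre-generate hook ran"\n' > "$workdir/pre_generate_hook.sh"
chmod +x "$workdir/pre_generate_hook.sh"
run_hook "$workdir/pre_generate_hook.sh"   # prints: pre-generate hook ran
```

This keeps the hooks strictly optional: a fresh checkout without any hook scripts behaves exactly as before.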

Use of safer Bash options



Gemtexter now does set -euf -o pipefail, which helps to eliminate bugs and to catch scripting errors sooner. Previous versions only used set -e.
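A quick illustration of what -o pipefail changes: without it, Bash reports only the exit status of the last command in a pipeline, so an early failure goes unnoticed.

```shell
#!/usr/bin/env bash
# Without pipefail, a pipeline's exit status is that of the LAST command,
# so the failure of `false` is masked by the success of `true`.

false | true
status_without=$?   # 0 -- the early failure is swallowed

set -o pipefail
false | true
status_with=$?      # 1 -- pipefail propagates the failing status

echo "without pipefail: $status_without, with pipefail: $status_with"
```

Combined with -e, this means a failure anywhere in a pipeline aborts the script instead of being silently ignored.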

Meta cache made obsolete



Here is the breaking change compared to older versions of Gemtexter: the $BASE_CONTENT_DIR/meta directory was made obsolete. meta was used to store various information about the blog post entries to make generating an Atom feed in Bash easier, especially the publishing date of each post. Instead, the publishing date is now encoded in the .gmi file itself. If it is missing, Gemtexter will set it to the current date and time on the first run.

An example blog post without any publishing date looks like this:

% cat gemfeed/2023-02-26-title-here.gmi
# Title here

The remaining content of the Gemtext file...

Gemtexter will now add a line starting with > Published at .... Any subsequent Atom feed generation will then use that date.

% cat gemfeed/2023-02-26-title-here.gmi
# Title here

> Published at 2023-02-26T21:43:51+01:00

The remaining content of the Gemtext file...
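The "set it on first run" step could look roughly like this (a sketch, not Gemtexter's actual code; it uses GNU date, as in the template example above):

```shell
#!/usr/bin/env bash
# Sketch: encode the publishing date into a .gmi post on first run,
# but only if no "> Published at" line is present yet.
set -eu

post=$(mktemp)
printf '# Title here\n\nThe remaining content of the Gemtext file...\n' > "$post"

if ! grep -q '^> Published at ' "$post"; then
    stamp=$(date --iso-8601=seconds)
    # Keep the title first, then a blank line and the publishing line,
    # followed by the rest of the original file.
    { head -n 1 "$post"
      printf '\n> Published at %s\n' "$stamp"
      tail -n +2 "$post"
    } > "$post.tmp" && mv "$post.tmp" "$post"
fi

grep '^> Published at ' "$post"   # the post now carries its publishing date
```

Because of the grep guard, the operation is idempotent: running it again leaves an already-dated post untouched.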

XMLLint support



Optionally, when the xmllint binary is installed, Gemtexter performs a simple lint check against the generated Atom feed. This double-checks whether the Atom feed is valid XML.

More



Additionally, there were a couple of bug fixes, refactorings, and overall improvements to the documentation.

E-Mail your comments to paul@nospam.buetow.org :-)

Other related posts are:

2024-10-02 Gemtexter 3.0.0 - Let's Gemtext again⁴
2023-07-21 Gemtexter 2.1.0 - Let's Gemtext again³
2023-03-25 Gemtexter 2.0.0 - Let's Gemtext again² (You are currently reading this)
2022-08-27 Gemtexter 1.1.0 - Let's Gemtext again
2021-06-05 Gemtexter - One Bash script to rule it all
2021-04-24 Welcome to the Geminispace

Back to the main site
'The Pragmatic Programmer' book notes gemini://foo.zone/gemfeed/2023-03-16-the-pragmatic-programmer-book-notes.gmi 2023-03-16T00:55:20+02:00 Paul Buetow aka snonux paul@dev.buetow.org

"The Pragmatic Programmer" book notes



Published at 2023-03-16T00:55:20+02:00

These are my personal takeaways after reading "The Pragmatic Programmer" by David Thomas and Andrew Hunt. Note that the book contains much more knowledge and wisdom, and that these notes only contain the points I personally found worth writing down. This is mainly for my own use, but you might find it helpful too.

         ,..........   ..........,
     ,..,'          '.'          ',..,
    ,' ,'            :            ', ',
   ,' ,'             :             ', ',
  ,' ,'              :              ', ',
 ,' ,'............., : ,.............', ',
,'  '............   '.'   ............'  ',
 '''''''''''''''''';''';''''''''''''''''''
                    '''

Think about your work while doing it - every day, on every project. Cultivate a sense of continuous improvement.

  • Be a realist.
  • Smell challenges.
  • Care about your craft.
  • Code can always be flawed, but it can meet the requirements.
  • You should be proud of your code, though.

No one writes perfect code, including you. However:

  • Paranoia is good thinking.
  • Practice defensive programming and crash early.
  • Crashing is often the best thing you can do.
  • Changes should be reversible.

Erlang: Defensive programming is a waste of time. Let it crash. "This can never happen" - don't practise that kind of self-deception when programming.

Leave assertions in the code, even in production. Only leave out assertions that actually cause performance problems.
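As an illustration of keeping assertions enabled in production, a tiny Bash sketch in the spirit of this site's tooling (assert is a hypothetical helper, not from the book):

```shell
# An assert that stays enabled in production: on failure it crashes
# early with a message instead of continuing with bad state.
assert () {
    if ! "$@"; then
        echo "assertion failed: $*" >&2
        exit 1
    fi
}

published_year=2023
assert test "$published_year" -ge 1970
echo "year looks sane: $published_year"
```

Crashing at the assertion points to the exact broken invariant; limping on would surface the corruption much later, somewhere else.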

Take small steps, always, and get feedback for each step the code takes. Avoid fortune telling; if you have to engage in it, the step is too large.

Decouple the code (e.g. via OOP or functional programming). Prefer interfaces for typing, and mixins for extending classes, over class inheritance.

  • Refactor now and not later.
  • Later, it will be even more painful.

Don't think outside the box. Find the box - it is bigger than you think. Think about the hard problem at hand: do you have to do it a certain way, or do you have to do it at all?

Do what works, not what's fashionable. For example, does SCRUM actually make sense for you? The goal is to deliver, not to "become" agile.

Continuous learning



Add new tools to your repertoire every day and keep the momentum up. Continuous learning is crucial: invest regularly in your knowledge portfolio. The learning process extends your thinking, and it does not matter if you never use what you learned.

  • Learn a new programming language every year.
  • Read a technical book every month.
  • Take courses.

Think critically about everything you learn. Use paper for your notes. There is something special about it.

Stay connected



It's your life, and you own it. Bruce Lee once said:

"I'm not in this world to live up to your expectations, and you're not in this world to live up to mine."

  • Go to meet-ups and actively engage.
  • Stay current.
  • Dealing with computers is hard. Dealing with people is harder.

It's your life. Share it, celebrate it, be proud and have fun.

The story of stone soup



How to motivate others to contribute something (e.g. ideas to a startup):

A kindly old stranger was walking through the land when he came upon a village. As he entered, the villagers moved towards their homes, locking doors and windows. The stranger smiled and asked, "Why are you all so frightened? I am a simple traveler, looking for a soft place to stay for the night and a warm place for a meal."

"There's not a bite to eat in the whole province," he was told. "We are weak and our children are starving. Better keep moving on."

"Oh, I have everything I need," he said. "In fact, I was thinking of making some stone soup to share with all of you." He pulled an iron cauldron from his cloak, filled it with water, and began to build a fire under it. Then, with great ceremony, he drew an ordinary-looking stone from a silken bag and dropped it into the water.

By now, hearing the rumor of food, most of the villagers had come out of their homes or watched from their windows. As the stranger sniffed the "broth" and licked his lips in anticipation, hunger began to overcome their fear. "Ahh," the stranger said to himself rather loudly, "I do like a tasty stone soup. Of course, stone soup with cabbage -- that's hard to beat."

Soon a villager approached hesitantly, holding a small cabbage he'd retrieved from its hiding place, and added it to the pot. "Wonderful!" cried the stranger. "You know, I once had stone soup with cabbage and a bit of salt beef as well, and it was fit for a king." The village butcher managed to find some salt beef . . . And so it went, through potatoes, onions, carrots, mushrooms, and so on, until there was indeed a delicious meal for everyone in the village to share.

The village elder offered the stranger a great deal of money for the magic stone, but he refused to sell it and traveled on the next day. As he left, the stranger came upon a group of village children standing near the road. He gave the silken bag containing the stone to the youngest child, whispering, "It was not the stone, but the villagers, that had performed the magic."

By working together, everyone contributes what they can, and together they achieve a greater good.

E-Mail your comments to paul@nospam.buetow.org :-)

Other book notes of mine are:

2025-06-07 "A Monk's Guide to Happiness" book notes
2025-04-19 "When: The Scientific Secrets of Perfect Timing" book notes
2024-10-24 "Staff Engineer" book notes
2024-07-07 "The Stoic Challenge" book notes
2024-05-01 "Slow Productivity" book notes
2023-11-11 "Mind Management" book notes
2023-07-17 "Software Developer's Career Guide and Soft Skills" book notes
2023-05-06 "The Obstacle is the Way" book notes
2023-04-01 "Never split the difference" book notes
2023-03-16 "The Pragmatic Programmer" book notes (You are currently reading this)

Back to the main site