From 3de01c850f53fb1581f5c4e9f1c5809f0df10c4c Mon Sep 17 00:00:00 2001 From: Paul Buetow Date: Mon, 2 Dec 2024 23:46:49 +0200 Subject: Update content for html --- gemfeed/atom.xml | 697 ++++++++++++++++++++++++++++--------------------------- 1 file changed, 357 insertions(+), 340 deletions(-) (limited to 'gemfeed/atom.xml') diff --git a/gemfeed/atom.xml b/gemfeed/atom.xml index 9c639df7..996cc470 100644 --- a/gemfeed/atom.xml +++ b/gemfeed/atom.xml @@ -1,11 +1,356 @@ - 2024-12-01T12:52:29+02:00 + 2024-12-02T23:46:16+02:00 foo.zone feed To be in the .zone! https://foo.zone/ + + f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation + + https://foo.zone/gemfeed/2024-12-03-f3s-kubernetes-with-freebsd-part-2.html + 2024-12-02T23:46:16+02:00 + + Paul Buetow aka snonux + paul@dev.buetow.org + + This is the second blog post about my f3s series for my self-hosting demands in my home lab. f3s? The 'f' stands for FreeBSD, and the '3s' stands for k3s, the Kubernetes distribution I will use on FreeBSD-based physical machines. + +
+

f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation


+
+This is the second blog post about my f3s series for my self-hosting demands in my home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution I will use on FreeBSD-based physical machines.
+
+We set the stage last time; this time, we will set up the hardware for this project.
+
+These are all the posts so far:
+
+2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage
+2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation (You are currently reading this)
+
+f3s logo
+
+The logo was generated by ChatGPT.
+
+Let's continue...
+
+

Table of Contents


+
+
+

Deciding on the hardware


+
+Note that the OpenBSD VMs included in the f3s setup (which will be used later in this blog series for internet ingress) are already in place. These are virtual machines that I rent from OpenBSD Amsterdam and Hetzner.
+
+https://openbsd.amsterdam
+https://hetzner.cloud
+
+This means that only the FreeBSD boxes still need to be covered; they will later run k3s in Linux VMs via the bhyve hypervisor.
+
+I've been considering whether to use Raspberry Pis or look for alternatives. It turns out that complete N100-based mini-computers aren't much more expensive than Raspberry Pi 5s, and they don't require assembly. Furthermore, I like that they are AMD64- and not ARM-based, which increases compatibility with some applications (e.g., I might want to virtualize Windows via bhyve on one of them, though that's out of scope for this blog series).
+
+

Not ARM but Intel N100


+
+I needed something compact, efficient, and capable enough to handle the demands of a small-scale Kubernetes cluster, preferably without requiring much assembly. After some research, I decided on the Beelink S12 Pro with the Intel N100 CPU.
+
+Beelink Mini S12 Pro N100 official page
+
+The Intel N100 CPUs are built on the "Alder Lake-N" architecture. These chips are designed to strike a good balance between performance and energy efficiency. With four cores, they're more than capable of running multiple containers, even with moderate workloads. Plus, at a TDP of only 6W, they keep the electricity bill low and the setup quiet - perfect for 24/7 operation.
+
+Beelink preparation
+
+The Beelink comes with the following specs:
+
+
    +
  • 12th Gen Intel N100 processor with four cores, four threads, and a maximum turbo frequency of 3.4 GHz.
  • +
  • 16 GB of DDR4 RAM; 16 GB is also the official maximum, although people have reportedly installed 32 GB successfully.
  • +
  • 500 GB M.2 SSD, with the option to install a second 2.5" SATA SSD (which I want to use later in this blog series).
  • +
  • Gigabit Ethernet
  • +
  • Four USB 3.2 Gen2 ports (maybe I want to mount something externally at some point)
  • +
  • Dimensions and weight: 115 x 102 x 39 mm, 280 g
  • +
  • Silent cooling system.
  • +
  • HDMI output (needed only for the initial installation)
  • +
  • Auto power-on via WoL (I may make use of it)
  • +
  • Wi-Fi (not going to use it)
  • +

+I bought three of them for the cluster I intend to build.
+
+
+
+Unboxing was uneventful. Every Beelink PC came with:
+
+
    +
  • An AC power adapter
  • +
  • An HDMI cable
  • +
  • A VESA mount with screws (not using it as of now)
  • +
  • Some manuals
  • +
  • The pre-assembled Beelink PC itself.
  • +
  • A "Hello" postcard (??)
  • +

+Overall, I love the small form factor.
+
+

Network switch


+
+I went with a TP-Link mini 5-port switch, as I had a spare one available. That switch will be plugged into my wall Ethernet port, which connects directly to my fiber internet router with 100 Mbit/s download and 50 Mbit/s upload speed.
+
+Switch
+
+

Installing FreeBSD


+
+

Base install


+
+First, I downloaded the boot-only ISO of the latest FreeBSD release and wrote it to a USB stick from my Fedora laptop:
+
+ +
[paul@earth]~/Downloads% sudo dd \
+  if=FreeBSD-14.1-RELEASE-amd64-bootonly.iso \
+  of=/dev/sda conv=sync
+
+
+Next, I connected each Beelink (one after another) to my monitor via HDMI (the FreeBSD text console looks strangely stretched on my LG DualUp monitor), plugged in Ethernet, an external USB keyboard, and the FreeBSD USB stick, and booted the device. With F7, I entered the boot menu and selected the USB stick to start the FreeBSD installation.
+
+The installation was uneventful. I selected:
+
+
    +
  • Guided ZFS on root (pool zroot)
  • +
  • Unencrypted ZFS (I will encrypt separate datasets later; the boxes should be able to boot without human interaction)
  • +
  • Static IP configuration (to ensure that the boxes always have the same IPs, even after switching the router/DHCP server)
  • +
  • I decided to enable the SSH daemon, NTP server, and NTP time synchronization at boot, and I also enabled powerd for automatic CPU frequency scaling.
  • +
  • In addition to root, I added a personal user, paul, whom I placed in the wheel group.
  • +

+After doing all that three times (once for each Beelink PC), I had three ready-to-use FreeBSD boxes! Their hostnames are f0, f1 and f2!
+
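For reference, the static-IP selections from the installer end up as entries in /etc/rc.conf along these lines (a sketch for f0: the IP, the re0 interface name, and the enabled services match what is described here, but the defaultrouter address is an assumption):

```
# /etc/rc.conf excerpt for f0 (sketch)
hostname="f0"
ifconfig_re0="inet 192.168.1.130 netmask 255.255.255.0"
defaultrouter="192.168.1.1"   # assumed router address
sshd_enable="YES"
ntpd_enable="YES"
ntpd_sync_on_start="YES"
powerd_enable="YES"
```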
+Beelink installation
+
+

Latest patch level and customizing /etc/hosts


+
+After the first boot, I upgraded to the latest FreeBSD patch level as follows:
+
+ +
root@f0:~ # freebsd-update fetch
+root@f0:~ # freebsd-update install
+root@f0:~ # reboot
+
+
+I also added the following entries for the three FreeBSD boxes to the /etc/hosts file:
+ +
root@f0:~ # cat <<END >>/etc/hosts
+192.168.1.130 f0 f0.lan f0.lan.buetow.org
+192.168.1.131 f1 f1.lan f1.lan.buetow.org
+192.168.1.132 f2 f2.lan f2.lan.buetow.org
+END
+
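Since the entries follow one pattern, they could also be generated in a loop rather than typed out; a sketch reproducing the exact lines above:

```shell
# Print the /etc/hosts entries for f0, f1, and f2
# (append to the file with >> /etc/hosts).
for i in 0 1 2; do
  printf '192.168.1.%d f%d f%d.lan f%d.lan.buetow.org\n' \
    "$((130 + i))" "$i" "$i" "$i"
done
```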
+
+

Additional packages after install


+
+After that, I installed the following additional packages:
+
+ +
root@f0:~ # pkg install helix doas zfs-periodic uptimed
+
+
+Helix? It's my favourite text editor. I have nothing against vi but like hx (Helix) more!
+
+doas? It's a pretty neat (and KISS) replacement for sudo. It has far fewer features than sudo, which is supposed to make it more secure. Its origin is the OpenBSD project. For doas, I accepted the default configuration (where users in the wheel group are allowed to run commands as root):
+
+ +
root@f0:~ # cp /usr/local/etc/doas.conf.sample /usr/local/etc/doas.conf
+
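The rule from the accepted default that matters here boils down to a single line of doas.conf (a sketch; the shipped sample file contains additional commented-out rules):

```
# /usr/local/etc/doas.conf
# Allow members of the wheel group to run commands as root.
permit :wheel
```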
+
+zfs-periodic is a nifty tool for automatically creating ZFS snapshots. I decided to go with the following configuration here:
+
+ +
root@f0:~ # sysrc daily_zfs_snapshot_enable=YES
+daily_zfs_snapshot_enable:  -> YES
+root@f0:~ # sysrc daily_zfs_snapshot_pools=zroot
+daily_zfs_snapshot_pools:  -> zroot
+root@f0:~ # sysrc daily_zfs_snapshot_keep=7
+daily_zfs_snapshot_keep:  -> 7
+root@f0:~ # sysrc weekly_zfs_snapshot_enable=YES
+weekly_zfs_snapshot_enable:  -> YES
+root@f0:~ # sysrc weekly_zfs_snapshot_pools=zroot
+weekly_zfs_snapshot_pools:  -> zroot
+root@f0:~ # sysrc weekly_zfs_snapshot_keep=5
+weekly_zfs_snapshot_keep:  -> 5
+root@f0:~ # sysrc monthly_zfs_snapshot_enable=YES
+monthly_zfs_snapshot_enable:  -> YES
+root@f0:~ # sysrc monthly_zfs_snapshot_pools=zroot
+monthly_zfs_snapshot_pools:  -> zroot
+root@f0:~ # sysrc monthly_zfs_snapshot_keep=2
+monthly_zfs_snapshot_keep:  -> 2
+
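The sysrc calls follow the same pattern for each period, so the whole configuration can be expressed as a loop; a sketch that only prints the commands (drop the echo to apply them, e.g. when repeating this on f1 and f2), keeping 7 daily, 5 weekly, and 2 monthly snapshots:

```shell
# Print the zfs-periodic sysrc commands for each snapshot period.
for spec in daily:7 weekly:5 monthly:2; do
  period=${spec%:*}
  keep=${spec#*:}
  echo "sysrc ${period}_zfs_snapshot_enable=YES"
  echo "sysrc ${period}_zfs_snapshot_pools=zroot"
  echo "sysrc ${period}_zfs_snapshot_keep=${keep}"
done
```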
+
+uptimed? I like to track my uptimes. This is how I configured the daemon:
+
+ +
root@f0:~ # cp /usr/local/etc/uptimed.conf-dist \
+  /usr/local/etc/uptimed.conf
+root@f0:~ # hx /usr/local/etc/uptimed.conf
+
+
+In the Helix editor session, I changed LOG_MAXIMUM_ENTRIES to 0 to keep all uptime entries forever instead of cutting them off at 50 (the default). After that, I enabled and started uptimed:
+
+ +
root@f0:~ # service uptimed enable
+root@f0:~ # service uptimed start
+
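The LOG_MAXIMUM_ENTRIES change could also be made non-interactively with sed instead of an editor session; a sketch demonstrated on a scratch copy (the real file is /usr/local/etc/uptimed.conf):

```shell
# Demonstrated on a scratch copy so nothing real is touched.
printf 'LOG_MAXIMUM_ENTRIES=50\n' > /tmp/uptimed.conf.demo
sed -i.bak 's/^LOG_MAXIMUM_ENTRIES=.*/LOG_MAXIMUM_ENTRIES=0/' /tmp/uptimed.conf.demo
cat /tmp/uptimed.conf.demo   # now shows LOG_MAXIMUM_ENTRIES=0
```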
+
+To check the current uptime stats, I can now run uprecords:
+
+ +
 root@f0:~ # uprecords
+     #               Uptime | System                                     Boot up
+----------------------------+---------------------------------------------------
+->   1     0 days, 00:07:34 | FreeBSD 14.1-RELEASE      Mon Dec  2 12:21:44 2024
+----------------------------+---------------------------------------------------
+NewRec     0 days, 00:07:33 | since                     Mon Dec  2 12:21:44 2024
+    up     0 days, 00:07:34 | since                     Mon Dec  2 12:21:44 2024
+  down     0 days, 00:00:00 | since                     Mon Dec  2 12:21:44 2024
+   %up              100.000 | since                     Mon Dec  2 12:21:44 2024
+
+
+

Hardware check


+
+

Ethernet


+
+Works. Nothing eventful, really. It's a cheap Realtek chip, but it will do what it is supposed to do.
+
+ +
paul@f0:~ % ifconfig re0
+re0: flags=1008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1500
+        options=8209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC,LINKSTATE>
+        ether e8:ff:1e:d7:1c:ac
+        inet 192.168.1.130 netmask 0xffffff00 broadcast 192.168.1.255
+        inet6 fe80::eaff:1eff:fed7:1cac%re0 prefixlen 64 scopeid 0x1
+        inet6 fd22:c702:acb7:0:eaff:1eff:fed7:1cac prefixlen 64 detached autoconf
+        inet6 2a01:5a8:304:1d5c:eaff:1eff:fed7:1cac prefixlen 64 autoconf pltime 10800 vltime 14400
+        media: Ethernet autoselect (1000baseT <full-duplex>)
+        status: active
+        nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>
+
+
+

RAM


+
+All there:
+
+ +
paul@f1:~ % sysctl hw.physmem
+hw.physmem: 16902905856
+
+
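That byte count is just short of a full 16 GiB; the small difference is presumably reserved by the firmware. Converted for readability:

```shell
# hw.physmem in bytes, converted to GiB.
awk 'BEGIN { printf "%.1f GiB\n", 16902905856 / (1024 * 1024 * 1024) }'
# prints 15.7 GiB
```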
+
+

CPUs


+
+Work:
+
+ +
paul@f0:~ % sysctl dev.cpu | grep freq:
+dev.cpu.3.freq: 705
+dev.cpu.2.freq: 705
+dev.cpu.1.freq: 604
+dev.cpu.0.freq: 604
+
+
+

CPU throttling


+
+With powerd running, the CPU frequency is throttled down when the box isn't busy. To stress it a bit, I ran ubench and watched the frequencies being unthrottled again:
+
+ +
paul@f0:~ % doas pkg install ubench
+paul@f0:~ % rehash # For tcsh to find the newly installed command
+paul@f0:~ % ubench &
+paul@f0:~ % sysctl dev.cpu | grep freq:
+dev.cpu.3.freq: 2922
+dev.cpu.2.freq: 2922
+dev.cpu.1.freq: 2923
+dev.cpu.0.freq: 2922
+
+
+Idle, all three Beelinks plus the switch consumed 26.2W. But with ubench stressing all the CPUs, it went up to 38.8W.
+
+Idle consumption.
+
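For scale, the measured idle draw translates into a yearly energy figure like this (just the kWh arithmetic; electricity prices vary too much to include):

```shell
# 26.2 W idle (three Beelinks plus the switch), running continuously for a year.
awk 'BEGIN { printf "%.1f kWh per year\n", 26.2 * 24 * 365 / 1000 }'
# prints 229.5 kWh per year
```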
+

Conclusion


+
+The Beelink S12 Pro with Intel N100 CPUs checks all the boxes for a k3s project: compact, efficient, expandable, and affordable. Its compatibility with both Linux and FreeBSD makes it versatile for other use cases, whether as part of your cluster or as a standalone system. If you’re looking for hardware that punches above its weight for Kubernetes, this little device deserves a spot on your shortlist.
+
+Beelinks stacked
+
+To ease cable management, I need to get shorter Ethernet cables. I will place the tower on my shelf, where most of the cables will be hidden (together with a UPS, which will also be added to the setup).
+
+What will be covered in the next post of this series? The bhyve/Rocky Linux and WireGuard setup as described in part 1 of this series.
+
+Other *BSD-related posts:
+
+2016-04-09 Jails and ZFS with Puppet on FreeBSD
+2022-07-30 Let's Encrypt with OpenBSD and Rex
+2022-10-30 Installing DTail on OpenBSD
+2024-01-13 One reason why I love OpenBSD
+2024-04-01 KISS high-availability with OpenBSD
+2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage
+2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation (You are currently reading this)
+
+E-Mail your comments to paul@nospam.buetow.org :-)
+
+Back to the main site
+
+
+
f3s: Kubernetes with FreeBSD - Part 1: Setting the stage @@ -26,7 +371,10 @@
I will post a new entry every month or so (there are too many other side projects for more frequent updates—I bet you can understand).

+These are all the posts so far:
+
2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage (You are currently reading this)
+2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation

f3s logo

@@ -172,6 +520,10 @@
What's your take on self-hosting? Are you planning to move away from managed cloud services? Stay tuned for the second part of this series, where I will likely write about the hardware and the OS setups.

+Read the next post of this series:
+
+f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation
+
Other *BSD-related posts:

2016-04-09 Jails and ZFS with Puppet on FreeBSD
@@ -180,6 +532,7 @@ 2024-01-13 One reason why I love OpenBSD
2024-04-01 KISS high-availability with OpenBSD
2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage (You are currently reading this)
+2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation

E-Mail your comments to paul@nospam.buetow.org :-)

@@ -2647,6 +3000,7 @@ http://www.gnu.org/software/src-highlite --> 2024-01-13 One reason why I love OpenBSD
2024-04-01 KISS high-availability with OpenBSD (You are currently reading this)
2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage
+2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation

Back to the main site
@@ -3004,6 +3358,7 @@ http://www.gnu.org/software/src-highlite --> 2024-01-13 One reason why I love OpenBSD (You are currently reading this)
2024-04-01 KISS high-availability with OpenBSD
2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage
+2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation

Back to the main site
@@ -7949,6 +8304,7 @@ rex commons 2024-01-13 One reason why I love OpenBSD
2024-04-01 KISS high-availability with OpenBSD
2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage
+2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation

Back to the main site
@@ -8667,345 +9023,6 @@ learn () {
E-Mail your comments to paul@nospam.buetow.org :-)

-Back to the main site
- - -
- - The release of DTail 4.0.0 - - https://foo.zone/gemfeed/2022-03-06-the-release-of-dtail-4.0.0.html - 2022-03-06T18:11:39+00:00 - - Paul Buetow aka snonux - paul@dev.buetow.org - - I have recently released DTail 4.0.0 and this blog post goes through all the new goodies. If you want to jump directly to DTail, do it here (there are nice animated gifs which demonstrates the usage pretty well): - -
-

The release of DTail 4.0.0


-
-Published at 2022-03-06T18:11:39+00:00
-
-I have recently released DTail 4.0.0 and this blog post goes through all the new goodies. If you want to jump directly to DTail, do it here (there are nice animated gifs which demonstrates the usage pretty well):
-
-https://dtail.dev
-
-
-                              ,_---~~~~~----._
-                        _,,_,*^____      _____``*g*\"*,
-  ____ _____     _ _   / __/ /'     ^.  /      \ ^@q   f
- |  _ \_   _|_ _(_) |   @f | @))    |  | @))   l  0 _/
- | | | || |/ _` | | |  \`/   \~____ / __ \_____/    \
- | |_| || | (_| | | |   |           _l__l_           I
- |____/ |_|\__,_|_|_|   }          [______]           I
-                        ]            | | |            |
-                        ]             ~ ~             |
-                        |                            |
-                         |                           |
-
-
-

Table of Contents


-
-
-

So, what's new in 4.0.0?


-
-

Rewritten logging


-
-For DTail 4, logging has been completely rewritten. The new package name is "internal/io/dlog". I rewrote the logging because DTail is a special case here: There are logs processed by DTail, there are logs produced by the DTail server itself, there are logs produced by a DTail client itself, there are logs only logged by a DTail client, there are logs only logged by the DTail server, and there are logs logged by both, server and client. There are also different logging levels and outputs involved.
-
-As you can imagine, it becomes fairly complex. There is no ready Go off-shelf logging library which suits my needs and the logging code in DTail 3 was just one big source code file with global variables and it wasn't sustainable to maintain anymore. So why not rewrite it for profit and fun?
-
-There's a are new log level structure now (The log level now can be specified with the "-logLevel" command line flag):
-
-
-// Available log levels.
-const (
-	None    level = iota
-	Fatal   level = iota
-	Error   level = iota
-	Warn    level = iota
-	Info    level = iota
-	Default level = iota
-	Verbose level = iota
-	Debug   level = iota
-	Devel   level = iota
-	Trace   level = iota
-	All     level = iota
-)
-
-
-DTail also supports multiple log outputs (e.g. to file or to stdout). More are now easily pluggable with the new logging package. The output can also be "enriched" (default) or "plain" (read more about that further below).
-
-

Configurable terminal color codes


-
-A complaint I received from the users of DTail 3 were the terminal colors used for the output. Under some circumstances (terminal configuration) it made the output difficult to read so that users defaulted to "--noColor" (disabling colored output completely). I toke it by heart and also rewrote the color handling. It's now possible to configure the foreground and background colors and an attribute (e.g. dim, bold, ...).
-
-The example "dtail.json" configuration file represents the default (now, more reasonable default) color codes used, and it is free to the user to customize them:
-
-
-{
-  "Client": {
-    "TermColorsEnable": true,
-    "TermColors": {
-      "Remote": {
-        "DelimiterAttr": "Dim",
-        "DelimiterBg": "Blue",
-        "DelimiterFg": "Cyan",
-        "RemoteAttr": "Dim",
-        "RemoteBg": "Blue",
-        "RemoteFg": "White",
-        "CountAttr": "Dim",
-        "CountBg": "Blue",
-        "CountFg": "White",
-        "HostnameAttr": "Bold",
-        "HostnameBg": "Blue",
-        "HostnameFg": "White",
-        "IDAttr": "Dim",
-        "IDBg": "Blue",
-        "IDFg": "White",
-        "StatsOkAttr": "None",
-        "StatsOkBg": "Green",
-        "StatsOkFg": "Black",
-        "StatsWarnAttr": "None",
-        "StatsWarnBg": "Red",
-        "StatsWarnFg": "White",
-        "TextAttr": "None",
-        "TextBg": "Black",
-        "TextFg": "White"
-      },
-      "Client": {
-        "DelimiterAttr": "Dim",
-        "DelimiterBg": "Yellow",
-        "DelimiterFg": "Black",
-        "ClientAttr": "Dim",
-        "ClientBg": "Yellow",
-        "ClientFg": "Black",
-        "HostnameAttr": "Dim",
-        "HostnameBg": "Yellow",
-        "HostnameFg": "Black",
-        "TextAttr": "None",
-        "TextBg": "Black",
-        "TextFg": "White"
-      },
-      "Server": {
-        "DelimiterAttr": "AttrDim",
-        "DelimiterBg": "BgCyan",
-        "DelimiterFg": "FgBlack",
-        "ServerAttr": "AttrDim",
-        "ServerBg": "BgCyan",
-        "ServerFg": "FgBlack",
-        "HostnameAttr": "AttrBold",
-        "HostnameBg": "BgCyan",
-        "HostnameFg": "FgBlack",
-        "TextAttr": "AttrNone",
-        "TextBg": "BgBlack",
-        "TextFg": "FgWhite"
-      },
-      "Common": {
-        "SeverityErrorAttr": "AttrBold",
-        "SeverityErrorBg": "BgRed",
-        "SeverityErrorFg": "FgWhite",
-        "SeverityFatalAttr": "AttrBold",
-        "SeverityFatalBg": "BgMagenta",
-        "SeverityFatalFg": "FgWhite",
-        "SeverityWarnAttr": "AttrBold",
-        "SeverityWarnBg": "BgBlack",
-        "SeverityWarnFg": "FgWhite"
-      },
-      "MaprTable": {
-        "DataAttr": "AttrNone",
-        "DataBg": "BgBlue",
-        "DataFg": "FgWhite",
-        "DelimiterAttr": "AttrDim",
-        "DelimiterBg": "BgBlue",
-        "DelimiterFg": "FgWhite",
-        "HeaderAttr": "AttrBold",
-        "HeaderBg": "BgBlue",
-        "HeaderFg": "FgWhite",
-        "HeaderDelimiterAttr": "AttrDim",
-        "HeaderDelimiterBg": "BgBlue",
-        "HeaderDelimiterFg": "FgWhite",
-        "HeaderSortKeyAttr": "AttrUnderline",
-        "HeaderGroupKeyAttr": "AttrReverse",
-        "RawQueryAttr": "AttrDim",
-        "RawQueryBg": "BgBlack",
-        "RawQueryFg": "FgCyan"
-      }
-    }
-  },
-  ...
-}
-
-
-You notice the different sections - these are different contexts:
-
-
    -
  • Remote: Color configuration for all log lines sent remotely from the server to the client.
  • -
  • Client: Color configuration for all lines produced by a DTail client by itself (e.g. status information).
  • -
  • Server: Color configuration for all lines produced by the DTail server by itself and sent to the client (e.g. server warnings or errors)
  • -
  • MaprTable: Color configuration for the map-reduce table output.
  • -
  • Common: Common color configuration used in various places (e.g. when it's not clear what's the current context of a line).
  • -

-When you do so, make sure that you check your "dtail.json" against the JSON schema file. This is to ensure that you don't configure an invalid color accidentally (requires "jsonschema" to be installed on your computer). Furthermore, the schema file is also a good reference for all possible colors available:
-
-
-jsonschema -i dtail.json schemas/dtail.schema.json
-
-
-

Serverless mode


-
-All DTail commands can now operate on log files (and other text files) directly without any DTail server running. So there isn't a need anymore to install a DTail server when you are on the target server already anyway, like the following example shows:
-
-
-% dtail --files /var/log/foo.log
-
-
-or
-
-
-% dmap --files /var/log/foo.log --query 'from TABLE select .... outfile result.csv'
-
-
-The way it works in Go code is that a connection to a server is managed through an interface and in serverless mode DTail calls through that interface to the server code directly without any TCP/IP and SSH connection made in the background. This means, that the binaries are a bit larger (also ship with the code which normally would be executed by the server) but the increase of binary size is not much.
-
-

Shorthand flags


-
-The "--files" from the previous example is now redundant. As a shorthand, It is now possible to do the following instead:
-
-
-% dtail /var/log/foo.log
-
-
-Of course, this also works with all other DTail client commands (dgrep, dcat, ... etc).
-
-

Spartan (aka plain) mode


-
-There's a plain mode, which makes DTail only print out the "plain" text of the files operated on (without any DTail specific enriched output). E.g.:
-
-
-% dcat --plain /etc/passwd > /etc/test
-% diff /etc/test /etc/passwd  # Same content, no diff
-
-
-This might be useful if you wanted to post-process the output.
-
-

Standard input pipe


-
-In serverless mode, you might want to process your data in a pipeline. You can do that now too through an input pipe:
-
-
-% dgrep --plain --regex 'somethingspecial' /var/log/foo.log |
-    dmap --query 'from TABLE select .... outfile result.csv'
-
-
-Or, use any other "standard" tool:
-
-
-% awk '.....' < /some/file | dtail ....
-
-
-

New command dtailhealth


-
-Prior to DTail 4, there was a flag for the "dtail" command to check the health of a remote DTail server (for use with monitoring system such as Nagios). That has been moved out to a separate binary to reduce complexity of the "dtail" command. The following checks whether DTail is operational on the current machine (you could also check a remote instance of DTail server, just adjust the server address).
-
-
-% cat check_dtail.sh
-#!/bin/sh
-
-exec /usr/local/bin/dtailhealth --server localhost:2222
-
-
-

Improved documentation


-
-Some features, such as custom log formats and the map-reduce query language, are now documented. Also, the examples have been updated to reflect the new features added. This also includes the new animated example Gifs (plus documentation how they were created).
-
-I must admit that not all features are documented yet:
-
-
    -
  • Server side scheduled map-reduce queries
  • -
  • Server side continuous map-reduce queries
  • -
  • Some more docs about terminal color customization
  • -
  • Some more docs about log levels
  • -

-That will be added in one of the future releases.
-
-

Integration testing suite


-
-DTail comes already with some unit tests, but what's new is a full integration testing suite which covers all common use cases of all the commands (dtail, dcat, dgrep, dmap) with a server backend and also in serverless mode.
-
-How are the tests implemented? All integration tests are simply unit tests in the "./integrationtests" folder. They must be explicitly activated with:
-
-
-% export DTAIL_INTEGRATION_TEST_RUN_MODE=yes
-
-
-Once done, first compile all commands, and then run the integration tests:
-
-
-% make
-.
-.
-.
-% go clean -testcache
-% go test -race -v ./integrationtests
-
-
-

Improved code


-
-Not that the code quality of DTail has been bad (I have been using Go vet and Go lint for previous releases and will keep using these), but this time I had new tools (such as SonarQube and BlackDuck) in my arsenal to:
-
-
    -
  • Reduce the complexity of a couple of functions (splitting code up into several smaller functions)
  • -
  • Avoid repeating code (this version of DTail doesn't use Go generics yet, though).
  • -

-Other than that, a lot of other code has been refactored as I saw fit.
-
-

Use of memory pools


-
-DTail makes excessive use of string builder and byte buffer objects. For performance reasons, those are now re-used from memory pools.
-
-

What's next


-
-DTail 5 won't be released any time soon I guess, but some 4.x.y releases will follow this year fore sure. I can think of:
-
-
    -
  • New (but backwards compatible) features which don't require a new major version bump (some features have been requested at work internally).
  • -
  • Even more improved documentation.
  • -
  • Dependency updates.
  • -

-I use usually DTail at work, but I have recently installed it on my personal OpenBSD machines too. I might write a small tutorial here (and I might also add the rc scripts as examples to one of the next DTail releases).
-
-I am a bit busy at the moment with two other pet projects of mine (one internal work-project, and one personal one, the latter you will read about in the next couple of months). If you have ideas (or even a patch), then please don't hesitate to contact me (either via E-Mail or a request at GitHub).
-
-E-Mail your comments to paul@nospam.buetow.org :-)
-
-Other related posts are:
-
-2021-04-22 DTail - The distributed log tail program
-2022-03-06 The release of DTail 4.0.0 (You are currently reading this)
-2022-10-30 Installing DTail on OpenBSD
-2023-09-25 DTail usage examples
-
-Thanks!
-
-Paul
-
Back to the main site
-- cgit v1.2.3