| author | Paul Buetow <paul@buetow.org> | 2024-01-13 23:08:14 +0200 |
| committer | Paul Buetow <paul@buetow.org> | 2024-01-13 23:08:14 +0200 |
| commit | 03a2cf8147eb2b06404be42314a3134bf835bad9 (patch) |
| tree | a6672e800a527beee09182dc22e0474b3a9e1b7d /gemfeed |
| parent | b86f2ab02bc504ae43128c5db1d1a016d1c9e764 (diff) |
Update content for gemtext
Diffstat (limited to 'gemfeed')
| -rw-r--r-- | gemfeed/2016-04-09-jails-and-zfs-on-freebsd-with-puppet.gmi | 7 |
| -rw-r--r-- | gemfeed/2016-04-09-jails-and-zfs-on-freebsd-with-puppet.gmi.tpl | 394 |
| -rw-r--r-- | gemfeed/2022-07-30-lets-encrypt-with-openbsd-and-rex.gmi | 7 |
| -rw-r--r-- | gemfeed/2022-07-30-lets-encrypt-with-openbsd-and-rex.gmi.tpl | 665 |
| -rw-r--r-- | gemfeed/2024-01-13-one-reason-why-i-love-openbsd.gmi | 71 |
| -rw-r--r-- | gemfeed/2024-01-13-one-reason-why-i-love-openbsd.gmi.tpl | 68 |
| -rw-r--r-- | gemfeed/atom.xml | 307 |
| -rw-r--r-- | gemfeed/atom.xml.tmp | 655 |
| -rw-r--r-- | gemfeed/index.gmi | 1 |
9 files changed, 1968 insertions, 207 deletions
diff --git a/gemfeed/2016-04-09-jails-and-zfs-on-freebsd-with-puppet.gmi b/gemfeed/2016-04-09-jails-and-zfs-on-freebsd-with-puppet.gmi
index 764e3d7d..c7099c30 100644
--- a/gemfeed/2016-04-09-jails-and-zfs-on-freebsd-with-puppet.gmi
+++ b/gemfeed/2016-04-09-jails-and-zfs-on-freebsd-with-puppet.gmi
@@ -385,6 +385,13 @@ Of course I am operating multiple Jails on the same host this way with Puppet:
 
 All done in a pretty automated manner.
 
+Other *BSD related posts are:
+
+=> ./2016-04-09-jails-and-zfs-on-freebsd-with-puppet.gmi 2016-04-09 Jails and ZFS with Puppet on FreeBSD (You are currently reading this)
+=> ./2022-07-30-lets-encrypt-with-openbsd-and-rex.gmi 2022-07-30 Let's Encrypt with OpenBSD and Rex
+=> ./2022-10-30-installing-dtail-on-openbsd.gmi 2022-10-30 Installing DTail on OpenBSD
+=> ./2024-01-13-one-reason-why-i-love-openbsd.gmi 2024-01-13 One reason why I love OpenBSD
+
 E-Mail your comments to `paul@nospam.buetow.org` :-)
 
 => ../ Back to the main site
diff --git a/gemfeed/2016-04-09-jails-and-zfs-on-freebsd-with-puppet.gmi.tpl b/gemfeed/2016-04-09-jails-and-zfs-on-freebsd-with-puppet.gmi.tpl
new file mode 100644
index 00000000..29cdc38d
--- /dev/null
+++ b/gemfeed/2016-04-09-jails-and-zfs-on-freebsd-with-puppet.gmi.tpl
@@ -0,0 +1,394 @@
+# Jails and ZFS with Puppet on FreeBSD
+
+> Published at 2016-04-09T18:29:47+01:00
+
+```
+ __ __
+ (( \---/ ))
+ )__ __(
+ / ()___() \
+ \ /(_)\ /
+ \ \_|_/ /
+ _______> <_______
+ //\ |>o<| /\\
+ \\/___ ___\//
+ | |
+ | |
+ | |
+ | |
+ `--....---'
+ \ \
+ \ `. hjw
+ \ `.
+```
+
+Over the last couple of years I wrote quite a few Puppet modules in order to manage my personal server infrastructure. One of them manages FreeBSD Jails and another one ZFS file systems. I thought I would give a brief overview of how it looks and feels.
+
+## ZFS
+
+The ZFS module is a pretty basic one. It does not manage ZFS pools yet, as I do not create pools often enough to justify automating that.
But let's see how we can create a ZFS file system (on an existing ZFS pool named ztank):
+
+Puppet snippet:
+
+```
+zfs::create { 'ztank/foo':
+  ensure     => present,
+  filesystem => '/srv/foo',
+
+  require => File['/srv'],
+}
+```
+
+Puppet run:
+
+```
+admin alphacentauri:/opt/git/server/puppet/manifests [1212]% puppet.apply
+Password:
+Info: Loading facts
+Info: Loading facts
+Info: Loading facts
+Info: Loading facts
+Notice: Compiled catalog for alphacentauri.home in environment production in 7.14 seconds
+Info: Applying configuration version '1460189837'
+Info: mount[files]: allowing * access
+Info: mount[restricted]: allowing * access
+Notice: /Stage[main]/Main/Node[alphacentauri]/Zfs::Create[ztank/foo]/Exec[ztank/foo_create]/returns: executed successfully
+Notice: Finished catalog run in 25.41 seconds
+admin alphacentauri:~ [1213]% zfs list | grep foo
+ztank/foo 96K 1.13T 96K /srv/foo
+admin alphacentauri:~ [1214]% df | grep foo
+ztank/foo 1214493520 96 1214493424 0% /srv/foo
+admin alphacentauri:~ [1215]%
+```
+
+Destroying the file system again just requires setting "ensure" to "absent" in Puppet:
+
+```
+zfs::create { 'ztank/foo':
+  ensure     => absent,
+  filesystem => '/srv/foo',
+
+  require => File['/srv'],
+}
+```
+
+Puppet run:
+
+```
+admin alphacentauri:/opt/git/server/puppet/manifests [1220]% puppet.apply
+Password:
+Info: Loading facts
+Info: Loading facts
+Info: Loading facts
+Info: Loading facts
+Notice: Compiled catalog for alphacentauri.home in environment production in 6.14 seconds
+Info: Applying configuration version '1460190203'
+Info: mount[files]: allowing * access
+Info: mount[restricted]: allowing * access
+Notice: /Stage[main]/Main/Node[alphacentauri]/Zfs::Create[ztank/foo]/Exec[zfs destroy -r ztank/foo]/returns: executed successfully
+Notice: Finished catalog run in 22.72 seconds
+admin alphacentauri:/opt/git/server/puppet/manifests [1221]% zfs list | grep foo
+zsh: done zfs list |
+zsh: exit 1 grep foo
+admin 
alphacentauri:/opt/git/server/puppet/manifests [1222:1]% df | grep foo
+zsh: done df |
+zsh: exit 1 grep foo
+```
+
+## Jails
+
+Here is an example of how a FreeBSD Jail can be created. The Jail will have its own public IPv6 address. It will also have its own internal IPv4 address with NAT to the internet (the host server only has one public IPv4 address, which must be shared between all the Jails).
+
+Furthermore, Puppet will ensure that the Jail will have its own ZFS file system (internally it is using the ZFS module). Please note that the NAT requires the packet filter to be set up correctly (not covered in this blog post).
+
+```
+include jail::freebsd
+
+# Cloned interface for Jail IPv4 NAT
+freebsd::rc_config { 'cloned_interfaces':
+  value => 'lo1',
+}
+freebsd::rc_config { 'ipv4_addrs_lo1':
+  value => '192.168.0.1-24/24'
+}
+
+freebsd::ipalias { '2a01:4f8:120:30e8::17':
+  ensure    => up,
+  proto     => 'inet6',
+  preflen   => '64',
+  interface => 're0',
+  aliasnum  => '8',
+}
+
+class { 'jail':
+  ensure       => present,
+  jails_config => {
+    sync => {
+      '_ensure'             => present,
+      '_type'               => 'freebsd',
+      '_mirror'             => 'ftp://ftp.de.freebsd.org',
+      '_remote_path'        => 'FreeBSD/releases/amd64/10.1-RELEASE',
+      '_dists'              => [ 'base.txz', 'doc.txz', ],
+      '_ensure_directories' => [ '/opt', '/opt/enc' ],
+      '_ensure_zfs'         => [ '/sync' ],
+      'host.hostname'       => "'sync.ian.buetow.org'",
+      'ip4.addr'            => '192.168.0.17',
+      'ip6.addr'            => '2a01:4f8:120:30e8::17',
+    },
+  }
+}
+```
+
+This is what the result looks like:
+
+```
+admin sun:/etc [1939]% puppet.apply
+Info: Loading facts
+Info: Loading facts
+Info: Loading facts
+Info: Loading facts
+Notice: Compiled catalog for sun.ian.buetow.org in environment production in 1.80 seconds
+Info: Applying configuration version '1460190986'
+Notice: /Stage[main]/Jail/File[/etc/jail.conf]/ensure: created
+Info: mount[files]: allowing * access
+Info: mount[restricted]: allowing * access
+Info: Computing checksum on 
file /etc/motd +Info: /Stage[main]/Motd/File[/etc/motd]: Filebucketed /etc/motd to puppet with sum fced1b6e89f50ef2c40b0d7fba9defe8 +Notice: /Stage[main]/Jail/Jail::Create[sync]/File[/jail/sync]/ensure: created +Notice: /Stage[main]/Jail/Jail::Create[sync]/Zfs::Create[zroot/jail/sync]/Exec[zroot/jail/sync_create]/returns: executed successfully +Notice: /Stage[main]/Jail/Jail::Create[sync]/File[/jail/sync/opt]/ensure: created +Notice: /Stage[main]/Jail/Jail::Create[sync]/File[/jail/sync/opt/enc]/ensure: created +Notice: /Stage[main]/Jail/Jail::Create[sync]/Jail::Ensure_zfs[/sync]/Zfs::Create[zroot/jail/sync/sync]/Exec[zroot/jail/sync/sync_create]/returns: executed successfully +Notice: /Stage[main]/Jail/Jail::Create[sync]/Jail::Freebsd::Create[sync]/File[/jail/sync/.jailbootstrap]/ensure: created +Notice: /Stage[main]/Jail/Jail::Create[sync]/Jail::Freebsd::Create[sync]/File[/etc/fstab.jail.sync]/ensure: created +Notice: /Stage[main]/Jail/Jail::Create[sync]/Jail::Freebsd::Create[sync]/File[/jail/sync/.jailbootstrap/bootstrap.sh]/ensure: created +Notice: /Stage[main]/Jail/Jail::Create[sync]/Jail::Freebsd::Create[sync]/Exec[sync_bootstrap]/returns: executed successfully +Notice: Finished catalog run in 49.72 seconds +admin sun:/etc [1942]% ls -l /jail/sync +total 154 +-r--r--r-- 1 root wheel 6198 11 Nov 2014 COPYRIGHT +drwxr-xr-x 2 root wheel 47 11 Nov 2014 bin +drwxr-xr-x 7 root wheel 43 11 Nov 2014 boot +dr-xr-xr-x 2 root wheel 2 11 Nov 2014 dev +drwxr-xr-x 23 root wheel 101 9 Apr 10:37 etc +drwxr-xr-x 3 root wheel 50 11 Nov 2014 lib +drwxr-xr-x 3 root wheel 4 11 Nov 2014 libexec +drwxr-xr-x 2 root wheel 2 11 Nov 2014 media +drwxr-xr-x 2 root wheel 2 11 Nov 2014 mnt +drwxr-xr-x 3 root wheel 3 9 Apr 10:36 opt +dr-xr-xr-x 2 root wheel 2 11 Nov 2014 proc +drwxr-xr-x 2 root wheel 143 11 Nov 2014 rescue +drwxr-xr-x 2 root wheel 6 11 Nov 2014 root +drwxr-xr-x 2 root wheel 132 11 Nov 2014 sbin +drwxr-xr-x 2 root wheel 2 9 Apr 10:36 sync +lrwxr-xr-x 1 root wheel 11 11 Nov 
2014 sys -> usr/src/sys +drwxrwxrwt 2 root wheel 2 11 Nov 2014 tmp +drwxr-xr-x 14 root wheel 14 11 Nov 2014 usr +drwxr-xr-x 24 root wheel 24 11 Nov 2014 var +admin sun:/etc [1943]% zfs list | grep sync;df | grep sync +zroot/jail/sync 162M 343G 162M /jail/sync +zroot/jail/sync/sync 144K 343G 144K /jail/sync/sync +/opt/enc 5061624 84248 4572448 2% /jail/sync/opt/enc +zroot/jail/sync 360214972 166372 360048600 0% /jail/sync +zroot/jail/sync/sync 360048744 144 360048600 0% /jail/sync/sync +admin sun:/etc [1944]% cat /etc/fstab.jail.sync +# Generated by Puppet for a Jail. +# Can contain file systems to be mounted curing jail start. +admin sun:/etc [1945]% cat /etc/jail.conf +# Generated by Puppet + +allow.chflags = true; +exec.start = '/bin/sh /etc/rc'; +exec.stop = '/bin/sh /etc/rc.shutdown'; +mount.devfs = true; +mount.fstab = "/etc/fstab.jail.$name"; +path = "/jail/$name"; + +sync { + host.hostname = 'sync.ian.buetow.org'; + ip4.addr = 192.168.0.17; + ip6.addr = 2a01:4f8:120:30e8::17; +} +admin sun:/etc [1955]% sudo service jail start sync +Password: +Starting jails: sync. 
+admin sun:/etc [1956]% jls | grep sync + 103 192.168.0.17 sync.ian.buetow.org /jail/sync +admin sun:/etc [1957]% sudo jexec 103 /bin/csh +root@sync:/ # ifconfig -a +re0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 + options=8209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC,LINKSTATE> + ether 50:46:5d:9f:fd:1e + inet6 2a01:4f8:120:30e8::17 prefixlen 64 + nd6 options=8021<PERFORMNUD,AUTO_LINKLOCAL,DEFAULTIF> + media: Ethernet autoselect (1000baseT <full-duplex>) + status: active +lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384 + options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6> + nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL> + lo1: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384 + options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6> + inet 192.168.0.17 netmask 0xffffffff + nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL> +``` + +## Inside-Jail Puppet + +To automatically setup the applications running in the Jail I am using Puppet as well. I wrote a few scripts which bootstrap Puppet inside of a newly created Jail. It is doing the following: + +* Mounts an encrypted container (containing a secret Puppet manifests [git repository]) +* Activates "pkg-ng", the FreeBSD binary package manager, in the Jail +* Installs Puppet plus all dependencies in the Jail +* Updates the Jail via "freebsd-update" to the latest version +* Restarts the Jail and invokes Puppet. +* Puppet then also schedules a periodic cron job for the next Puppet runs. + +``` +admin sun:~ [1951]% sudo /opt/snonux/local/etc/init.d/enc activate sync +Starting jails: dns. +The package management tool is not yet installed on your system. +Do you want to fetch and install it now? [y/N]: y +Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/freebsd:10:x86:64/latest, please wait... +Verifying signature with trusted certificate pkg.freebsd.org.2013102301... done +[sync.ian.buetow.org] Installing pkg-1.7.2... 
+[sync.ian.buetow.org] Extracting pkg-1.7.2: 100% +Updating FreeBSD repository catalogue... +[sync.ian.buetow.org] Fetching meta.txz: 100% 944 B 0.9kB/s 00:01 +[sync.ian.buetow.org] Fetching packagesite.txz: 100% 5 MiB 5.6MB/s 00:01 +Processing entries: 100% +FreeBSD repository update completed. 25091 packages processed. +Updating database digests format: 100% +The following 20 package(s) will be affected (of 0 checked): + + New packages to be INSTALLED: + git: 2.7.4_1 + expat: 2.1.0_3 + python27: 2.7.11_1 + libffi: 3.2.1 + indexinfo: 0.2.4 + gettext-runtime: 0.19.7 + p5-Error: 0.17024 + perl5: 5.20.3_9 + cvsps: 2.1_1 + p5-Authen-SASL: 2.16_1 + p5-Digest-HMAC: 1.03_1 + p5-GSSAPI: 0.28_1 + curl: 7.48.0_1 + ca_root_nss: 3.22.2 + p5-Net-SMTP-SSL: 1.03 + p5-IO-Socket-SSL: 2.024 + p5-Net-SSLeay: 1.72 + p5-IO-Socket-IP: 0.37 + p5-Socket: 2.021 + p5-Mozilla-CA: 20160104 + + The process will require 144 MiB more space. + 30 MiB to be downloaded. +[sync.ian.buetow.org] Fetching git-2.7.4_1.txz: 100% 4 MiB 3.7MB/s 00:01 +[sync.ian.buetow.org] Fetching expat-2.1.0_3.txz: 100% 98 KiB 100.2kB/s 00:01 +[sync.ian.buetow.org] Fetching python27-2.7.11_1.txz: 100% 10 MiB 10.7MB/s 00:01 +[sync.ian.buetow.org] Fetching libffi-3.2.1.txz: 100% 35 KiB 36.2kB/s 00:01 +[sync.ian.buetow.org] Fetching indexinfo-0.2.4.txz: 100% 5 KiB 5.0kB/s 00:01 +[sync.ian.buetow.org] Fetching gettext-runtime-0.19.7.txz: 100% 148 KiB 151.1kB/s 00:01 +[sync.ian.buetow.org] Fetching p5-Error-0.17024.txz: 100% 24 KiB 24.8kB/s 00:01 +[sync.ian.buetow.org] Fetching perl5-5.20.3_9.txz: 100% 13 MiB 6.9MB/s 00:02 +[sync.ian.buetow.org] Fetching cvsps-2.1_1.txz: 100% 41 KiB 42.1kB/s 00:01 +[sync.ian.buetow.org] Fetching p5-Authen-SASL-2.16_1.txz: 100% 44 KiB 45.1kB/s 00:01 +[sync.ian.buetow.org] Fetching p5-Digest-HMAC-1.03_1.txz: 100% 9 KiB 9.5kB/s 00:01 +[sync.ian.buetow.org] Fetching p5-GSSAPI-0.28_1.txz: 100% 41 KiB 41.7kB/s 00:01 +[sync.ian.buetow.org] Fetching curl-7.48.0_1.txz: 100% 2 MiB 2.2MB/s 00:01 
+[sync.ian.buetow.org] Fetching ca_root_nss-3.22.2.txz: 100% 324 KiB 331.4kB/s 00:01 +[sync.ian.buetow.org] Fetching p5-Net-SMTP-SSL-1.03.txz: 100% 11 KiB 10.8kB/s 00:01 +[sync.ian.buetow.org] Fetching p5-IO-Socket-SSL-2.024.txz: 100% 153 KiB 156.4kB/s 00:01 +[sync.ian.buetow.org] Fetching p5-Net-SSLeay-1.72.txz: 100% 234 KiB 239.3kB/s 00:01 +[sync.ian.buetow.org] Fetching p5-IO-Socket-IP-0.37.txz: 100% 27 KiB 27.4kB/s 00:01 +[sync.ian.buetow.org] Fetching p5-Socket-2.021.txz: 100% 37 KiB 38.0kB/s 00:01 +[sync.ian.buetow.org] Fetching p5-Mozilla-CA-20160104.txz: 100% 147 KiB 150.8kB/s 00:01 +Checking integrity... +[sync.ian.buetow.org] [1/12] Installing libyaml-0.1.6_2... +[sync.ian.buetow.org] [1/12] Extracting libyaml-0.1.6_2: 100% +[sync.ian.buetow.org] [2/12] Installing libedit-3.1.20150325_2... +[sync.ian.buetow.org] [2/12] Extracting libedit-3.1.20150325_2: 100% +[sync.ian.buetow.org] [3/12] Installing ruby-2.2.4,1... +[sync.ian.buetow.org] [3/12] Extracting ruby-2.2.4,1: 100% +[sync.ian.buetow.org] [4/12] Installing ruby22-gems-2.6.2... +[sync.ian.buetow.org] [4/12] Extracting ruby22-gems-2.6.2: 100% +[sync.ian.buetow.org] [5/12] Installing libxml2-2.9.3... +[sync.ian.buetow.org] [5/12] Extracting libxml2-2.9.3: 100% +[sync.ian.buetow.org] [6/12] Installing dmidecode-3.0... +[sync.ian.buetow.org] [6/12] Extracting dmidecode-3.0: 100% +[sync.ian.buetow.org] [7/12] Installing rubygem-json_pure-1.8.3... +[sync.ian.buetow.org] [7/12] Extracting rubygem-json_pure-1.8.3: 100% +[sync.ian.buetow.org] [8/12] Installing augeas-1.4.0... +[sync.ian.buetow.org] [8/12] Extracting augeas-1.4.0: 100% +[sync.ian.buetow.org] [9/12] Installing rubygem-facter-2.4.4... +[sync.ian.buetow.org] [9/12] Extracting rubygem-facter-2.4.4: 100% +[sync.ian.buetow.org] [10/12] Installing rubygem-hiera1-1.3.4_1... +[sync.ian.buetow.org] [10/12] Extracting rubygem-hiera1-1.3.4_1: 100% +[sync.ian.buetow.org] [11/12] Installing rubygem-ruby-augeas-0.5.0_2... 
+[sync.ian.buetow.org] [11/12] Extracting rubygem-ruby-augeas-0.5.0_2: 100% +[sync.ian.buetow.org] [12/12] Installing puppet38-3.8.4_1... +===> Creating users and/or groups. +Creating group 'puppet' with gid '814'. +Creating user 'puppet' with uid '814'. +[sync.ian.buetow.org] [12/12] Extracting puppet38-3.8.4_1: 100% +. +. +. +. +. +Looking up update.FreeBSD.org mirrors... 4 mirrors found. +Fetching public key from update4.freebsd.org... done. +Fetching metadata signature for 10.1-RELEASE from update4.freebsd.org... done. +Fetching metadata index... done. +Fetching 2 metadata files... done. +Inspecting system... done. +Preparing to download files... done. +Fetching 874 patches.....10....20....30.... +. +. +. +Applying patches... done. +Fetching 1594 files... +Installing updates... +done. +Info: Loading facts +Info: Loading facts +Info: Loading facts +Info: Loading facts +Could not retrieve fact='pkgng_version', resolution='<anonymous>': undefined method `pkgng_enabled' for Facter:Module +Warning: Config file /usr/local/etc/puppet/hiera.yaml not found, using Hiera defaults +Notice: Compiled catalog for sync.ian.buetow.org in environment production in 1.31 seconds +Warning: Found multiple default providers for package: pkgng, gem, pip; using pkgng +Info: Applying configuration version '1460192563' +Notice: /Stage[main]/S_base_freebsd/User[root]/shell: shell changed '/bin/csh' to '/bin/tcsh' +Notice: /Stage[main]/S_user::Root_files/S_user::All_files[root_user]/File[/root/user]/ensure: created +Notice: /Stage[main]/S_user::Root_files/S_user::My_files[root]/File[/root/userfiles]/ensure: created +Notice: /Stage[main]/S_user::Root_files/S_user::My_files[root]/File[/root/.task]/ensure: created +. +. +. +. 
+Notice: Finished catalog run in 206.09 seconds
+```
+
+## Managing multiple Jails
+
+Of course I am operating multiple Jails on the same host this way with Puppet:
+
+* A Jail for the MTA
+* A Jail for the webserver
+* A Jail for the BIND DNS server
+* A Jail for syncing data back and forth between various servers
+* A Jail for other personal (experimental) use
+* ...etc
+
+All done in a pretty automated manner.
+
+Other *BSD related posts are:
+
+<< template::inline::index bsd
+
+E-Mail your comments to `paul@nospam.buetow.org` :-)
+
+=> ../ Back to the main site
diff --git a/gemfeed/2022-07-30-lets-encrypt-with-openbsd-and-rex.gmi b/gemfeed/2022-07-30-lets-encrypt-with-openbsd-and-rex.gmi
index 41af24e3..1c9ea500 100644
--- a/gemfeed/2022-07-30-lets-encrypt-with-openbsd-and-rex.gmi
+++ b/gemfeed/2022-07-30-lets-encrypt-with-openbsd-and-rex.gmi
@@ -656,6 +656,13 @@ OpenBSD suits perfectly here as all the tools are already part of the base insta
 
 Why reinvent the wheel? I love that a `Rexfile` is just a Perl DSL. Also, OpenBSD comes with Perl in the base system, so no new programming language had to be added to my mix for the configuration management system. And `acme.sh` is not a Bash script but a standard Bourne shell script, so I didn't have to install an additional shell; OpenBSD does not ship with Bash pre-installed. 
+Other *BSD related posts are: + +=> ./2016-04-09-jails-and-zfs-on-freebsd-with-puppet.gmi 2016-04-09 Jails and ZFS with Puppet on FreeBSD +=> ./2022-07-30-lets-encrypt-with-openbsd-and-rex.gmi 2022-07-30 Let's Encrypt with OpenBSD and Rex (You are currently reading this) +=> ./2022-10-30-installing-dtail-on-openbsd.gmi 2022-10-30 Installing DTail on OpenBSD +=> ./2024-01-13-one-reason-why-i-love-openbsd.gmi 2024-01-13 One reason why I love OpenBSD + E-Mail your comments to `paul@nospam.buetow.org` :-) => ../ Back to the main site diff --git a/gemfeed/2022-07-30-lets-encrypt-with-openbsd-and-rex.gmi.tpl b/gemfeed/2022-07-30-lets-encrypt-with-openbsd-and-rex.gmi.tpl new file mode 100644 index 00000000..1a83ef67 --- /dev/null +++ b/gemfeed/2022-07-30-lets-encrypt-with-openbsd-and-rex.gmi.tpl @@ -0,0 +1,665 @@ +# Let's Encrypt with OpenBSD and Rex + +> Published at 2022-07-30T12:14:31+01:00 + +``` + / _ \ + The Hebern Machine \ ." ". / + ___ / \ + .."" "".. | O | + / \ | | + / \ | | + --------------------------------- + _/ o (O) o _ | + _/ ." ". | + I/ _________________/ \ | + _/I ." | | + ===== / I / / | + ===== | | | \ | _________________." | +===== | | | | | / \ / _|_|__|_|_ __ | + | | | | | | | \ "._." / o o \ ." ". | + | --| --| -| / \ _/ / \ | + \____\____\__| \ ______ | / | | | + -------- --- / | | | + ( ) (O) / \ / | + ----------------------- ".__." | + _|__________________________________________|_ + / \ + /________________________________________________\ + ASCII Art by John Savard +``` + +I was amazed at how easy it is to automatically generate and update Let's Encrypt certificates with OpenBSD. + +## What's Let's Encrypt? + +> Let's Encrypt is a non-profit certificate authority run by Internet Security Research Group that provides X.509 certificates for Transport Layer Security (TLS) encryption at no charge. It is the world's largest certificate authority, used by more than 265 million websites, with the goal of all websites being secure and using HTTPS. 
+
+=> https://en.wikipedia.org/wiki/Let's_Encrypt Source: Wikipedia
+
+In short, it gives away TLS certificates for your website - for free! The catch is that the certificates are only valid for three months. So it is better to automate certificate generation and renewal.
+
+## Meet `acme-client`
+
+`acme-client` is the default Automatic Certificate Management Environment (ACME) client on OpenBSD and part of the OpenBSD base system.
+
+When invoked, the client first checks whether certificates actually need to be generated:
+
+* It first checks whether a certificate already exists; if not, it will attempt to generate a new one.
+* If the certificate already exists but expires within the next 30 days, it will renew it.
+* Otherwise, `acme-client` won't do anything.
+
+Oversimplified, `acme-client` performs the following steps to generate a new certificate:
+
+* It reads its config file `/etc/acme-client.conf` for the list of hosts (and their alternative names) to generate certificates for. This means you can also have certificates for arbitrary subdomains!
+* It automatically generates the private certificate part (the certificate key) and the certificate signing request (CSR) under `/etc/ssl/...`.
+* It requests Let's Encrypt to sign the certificate. This also includes providing a set of temporary files requested by Let's Encrypt in the next step for verification.
+* Let's Encrypt then contacts the hostname for the certificate through a particular URL (e.g. `http://foo.zone/.well-known/acme-challenge/...`) to verify that the requester is the valid owner of the host.
+* Let's Encrypt generates a certificate, which is then downloaded to `/etc/ssl/...`.
+
+## Configuration
+
+Some (but easy) configuration is required to make this all work on OpenBSD. 
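The 30-day renewal check described above can be reproduced with `openssl`(1) from the base system. This is only a sketch of the decision logic, not how `acme-client` is actually implemented, and the certificate path is just an example:

```shell
#!/bin/sh
# Sketch: mimic acme-client's renewal decision for one certificate.
# Prints "renew" when fewer than 30 days of validity remain (or the
# file cannot be read), "keep" otherwise.
renewal_due() {
	crt=$1
	# -checkend N exits 0 if the certificate is still valid N seconds from now
	if openssl x509 -noout -checkend $((30 * 24 * 3600)) -in "$crt" >/dev/null 2>&1; then
		echo keep
	else
		echo renew
	fi
}

renewal_due /etc/ssl/foo.zone.fullchain.pem
```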
+
+### acme-client.conf
+
+This is what my `/etc/acme-client.conf` looks like (I copied a template from `/etc/examples/acme-client.conf` to `/etc/acme-client.conf` and added my domains at the bottom):
+
+```
+#
+# $OpenBSD: acme-client.conf,v 1.4 2020/09/17 09:13:06 florian Exp $
+#
+authority letsencrypt {
+    api url "https://acme-v02.api.letsencrypt.org/directory"
+    account key "/etc/acme/letsencrypt-privkey.pem"
+}
+
+authority letsencrypt-staging {
+    api url "https://acme-staging-v02.api.letsencrypt.org/directory"
+    account key "/etc/acme/letsencrypt-staging-privkey.pem"
+}
+
+authority buypass {
+    api url "https://api.buypass.com/acme/directory"
+    account key "/etc/acme/buypass-privkey.pem"
+    contact "mailto:me@example.com"
+}
+
+authority buypass-test {
+    api url "https://api.test4.buypass.no/acme/directory"
+    account key "/etc/acme/buypass-test-privkey.pem"
+    contact "mailto:me@example.com"
+}
+
+domain buetow.org {
+    alternative names { www.buetow.org paul.buetow.org }
+    domain key "/etc/ssl/private/buetow.org.key"
+    domain full chain certificate "/etc/ssl/buetow.org.fullchain.pem"
+    sign with letsencrypt
+}
+
+domain dtail.dev {
+    alternative names { www.dtail.dev }
+    domain key "/etc/ssl/private/dtail.dev.key"
+    domain full chain certificate "/etc/ssl/dtail.dev.fullchain.pem"
+    sign with letsencrypt
+}
+
+domain foo.zone {
+    alternative names { www.foo.zone }
+    domain key "/etc/ssl/private/foo.zone.key"
+    domain full chain certificate "/etc/ssl/foo.zone.fullchain.pem"
+    sign with letsencrypt
+}
+
+domain irregular.ninja {
+    alternative names { www.irregular.ninja }
+    domain key "/etc/ssl/private/irregular.ninja.key"
+    domain full chain certificate "/etc/ssl/irregular.ninja.fullchain.pem"
+    sign with letsencrypt
+}
+
+domain snonux.land {
+    alternative names { www.snonux.land }
+    domain key "/etc/ssl/private/snonux.land.key"
+    domain full chain certificate "/etc/ssl/snonux.land.fullchain.pem"
+    sign with letsencrypt
+}
+```
+
+### httpd.conf
+
+For ACME to 
work, you will need to configure the HTTP daemon so that the "special" ACME requests from Let's Encrypt are served correctly. I am using the standard OpenBSD `httpd` here. These are the snippets I use for the `foo.zone` host in `/etc/httpd.conf` (of course, you need a similar setup for all other hosts as well): + +``` +server "foo.zone" { + listen on * port 80 + location "/.well-known/acme-challenge/*" { + root "/acme" + request strip 2 + } + location * { + block return 302 "https://$HTTP_HOST$REQUEST_URI" + } +} + +server "foo.zone" { + listen on * tls port 443 + tls { + certificate "/etc/ssl/foo.zone.fullchain.pem" + key "/etc/ssl/private/foo.zone.key" + } + location * { + root "/htdocs/gemtexter/foo.zone" + directory auto index + } +} +``` + +As you see, plain HTTP only serves the ACME challenge path. Otherwise, it redirects the requests to TLS. The TLS section then attempts to use the Let's Encrypt certificates. + +It is worth noticing that `httpd` will start without the certificates being present. This will cause a certificate error when you try to reach the HTTPS endpoint, but it helps to bootstrap Let's Encrypt. As you saw in the config snippet above, Let's Encrypt only requests the plain HTTP endpoint for the verification process, so HTTPS doesn't need to be operational yet at this stage. But once the certificates are generated, you will have to reload or restart `httpd` to use any new certificate. + +### CRON job + +You could now run `doas acme-client foo.zone` to generate the certificate or to renew it. Or you could automate it with CRON. + +I have created a script `/usr/local/bin/acme.sh` for that for all of my domains: + +``` +#!/bin/sh + +function handle_cert { + host=$1 + # Create symlink, so that relayd also can read it. + crt_path=/etc/ssl/$host + if [ -e $crt_path.crt ]; then + rm $crt_path.crt + fi + ln -s $crt_path.fullchain.pem $crt_path.crt + # Requesting and renewing certificate. 
+ /usr/sbin/acme-client -v $host +} + +has_update=no +handle_cert www.buetow.org +if [ $? -eq 0 ]; then + has_update=yes +fi +handle_cert www.paul.buetow.org +if [ $? -eq 0 ]; then + has_update=yes +fi +handle_cert www.tmp.buetow.org +if [ $? -eq 0 ]; then + has_update=yes +fi +handle_cert www.dtail.dev +if [ $? -eq 0 ]; then + has_update=yes +fi +handle_cert www.foo.zone +if [ $? -eq 0 ]; then + has_update=yes +fi +handle_cert www.irregular.ninja +if [ $? -eq 0 ]; then + has_update=yes +fi +handle_cert www.snonux.land +if [ $? -eq 0 ]; then + has_update=yes +fi + +# Pick up the new certs. +if [ $has_update = yes ]; then + /usr/sbin/rcctl reload httpd + /usr/sbin/rcctl reload relayd + /usr/sbin/rcctl restart smtpd +fi +``` + +And added the following line to `/etc/daily.local` to run the script once daily so that certificates will be renewed fully automatically: + +``` +/usr/local/bin/acme.sh +``` + +I am receiving a daily output via E-Mail like this now: + +``` +Running daily.local: +acme-client: /etc/ssl/buetow.org.fullchain.pem: certificate valid: 80 days left +acme-client: /etc/ssl/paul.buetow.org.fullchain.pem: certificate valid: 80 days left +acme-client: /etc/ssl/tmp.buetow.org.fullchain.pem: certificate valid: 80 days left +acme-client: /etc/ssl/dtail.dev.fullchain.pem: certificate valid: 80 days left +acme-client: /etc/ssl/foo.zone.fullchain.pem: certificate valid: 80 days left +acme-client: /etc/ssl/irregular.ninja.fullchain.pem: certificate valid: 80 days left +acme-client: /etc/ssl/snonux.land.fullchain.pem: certificate valid: 79 days left +``` + +## relayd.conf and smtpd.conf + +Besides `httpd`, `relayd` (mainly for Gemini) and `smtpd` (for mail, of course) also use TLS certificates. And as you can see in `acme.sh`, the services are reloaded or restarted (`smtpd` doesn't support reload) whenever a certificate is generated or updated. + +## Rexification + +I didn't write all these configuration files by hand. 
As a matter of fact, everything is automated with the Rex configuration management system. + +=> https://www.rexify.org + +At the top of the `Rexfile` I define all my hosts: + +``` +our @acme_hosts = qw/buetow.org paul.buetow.org tmp.buetow.org dtail.dev foo.zone irregular.ninja snonux.land/; +``` + +### General ACME client configuration + +ACME will be installed into the frontend group of hosts. Here, blowfish is the primary, and twofish is the secondary OpenBSD box. + +``` +group frontends => 'blowfish.buetow.org', 'twofish.buetow.org'; +``` + +This is my Rex task for the general ACME configuration: + +``` +desc 'Configure ACME client'; +task 'acme', group => 'frontends', + sub { + file '/etc/acme-client.conf', + content => template('./etc/acme-client.conf.tpl', + acme_hosts => \@acme_hosts, + is_primary => $is_primary), + owner => 'root', + group => 'wheel', + mode => '644'; + + file '/usr/local/bin/acme.sh', + content => template('./scripts/acme.sh.tpl', + acme_hosts => \@acme_hosts, + is_primary => $is_primary), + owner => 'root', + group => 'wheel', + mode => '744'; + + file '/etc/daily.local', + ensure => 'present', + owner => 'root', + group => 'wheel', + mode => '644'; + + append_if_no_such_line '/etc/daily.local', '/usr/local/bin/acme.sh'; + }; +``` + +And there is also a Rex task just to run the ACME script remotely: + +``` +desc 'Invoke ACME client'; +task 'acme_invoke', group => 'frontends', + sub { + say run '/usr/local/bin/acme.sh'; + }; + +``` + +Furthermore, this snippet (also at the top of the Rexfile) helps to determine whether the current server is the primary server (all hosts will be without the `www.` prefix) or the secondary server (all hosts will be with the `www.` prefix): + +``` +# Bootstrapping the FQDN based on the server IP as the hostname and domain +# facts aren't set yet due to the myname file in the first place. 
+our $fqdns = sub { + my $ipv4 = shift; + return 'blowfish.buetow.org' if $ipv4 eq '23.88.35.144'; + return 'twofish.buetow.org' if $ipv4 eq '108.160.134.135'; + Rex::Logger::info("Unable to determine hostname for $ipv4", 'error'); + return 'HOSTNAME-UNKNOWN.buetow.org'; +}; + +# To determine whether the server is the primary or the secondary. +our $is_primary = sub { + my $ipv4 = shift; + $fqdns->($ipv4) eq 'blowfish.buetow.org'; +}; +``` + +The following is the `acme-client.conf.tpl` Rex template file used for the automation. You see that the `www.` prefix isn't sent for the primary server. E.g. `foo.zone` will be served by the primary server (in my case, a server located in Germany) and `www.foo.zone` by the secondary server (in my case, a server located in Japan): + +``` +# +# $OpenBSD: acme-client.conf,v 1.4 2020/09/17 09:13:06 florian Exp $ +# +authority letsencrypt { + api url "https://acme-v02.api.letsencrypt.org/directory" + account key "/etc/acme/letsencrypt-privkey.pem" +} + +authority letsencrypt-staging { + api url "https://acme-staging-v02.api.letsencrypt.org/directory" + account key "/etc/acme/letsencrypt-staging-privkey.pem" +} + +authority buypass { + api url "https://api.buypass.com/acme/directory" + account key "/etc/acme/buypass-privkey.pem" + contact "mailto:me@example.com" +} + +authority buypass-test { + api url "https://api.test4.buypass.no/acme/directory" + account key "/etc/acme/buypass-test-privkey.pem" + contact "mailto:me@example.com" +} + +<% + our $primary = $is_primary->($vio0_ip); + our $prefix = $primary ? '' : 'www.'; +%> + +<% for my $host (@$acme_hosts) { %> +domain <%= $prefix.$host %> { + domain key "/etc/ssl/private/<%= $prefix.$host %>.key" + domain full chain certificate "/etc/ssl/<%= $prefix.$host %>.fullchain.pem" + sign with letsencrypt +} +<% } %> + +``` + +And this is the `acme.sh.tpl`: + +``` +#!/bin/sh + +<% + our $primary = $is_primary->($vio0_ip); + our $prefix = $primary ? 
'' : 'www.'; +-%> + +function handle_cert { + host=$1 + # Create symlink, so that relayd also can read it. + crt_path=/etc/ssl/$host + if [ -e $crt_path.crt ]; then + rm $crt_path.crt + fi + ln -s $crt_path.fullchain.pem $crt_path.crt + # Requesting and renewing certificate. + /usr/sbin/acme-client -v $host +} + +has_update=no +<% for my $host (@$acme_hosts) { -%> +handle_cert <%= $prefix.$host %> +if [ $? -eq 0 ]; then + has_update=yes +fi +<% } -%> + +# Pick up the new certs. +if [ $has_update = yes ]; then + /usr/sbin/rcctl reload httpd + /usr/sbin/rcctl reload relayd + /usr/sbin/rcctl restart smtpd +fi +``` + +### Service rexification + +These are the Rex tasks setting up `httpd`, `relayd` and `smtpd` services: + +``` +desc 'Setup httpd'; +task 'httpd', group => 'frontends', + sub { + append_if_no_such_line '/etc/rc.conf.local', 'httpd_flags='; + + file '/etc/httpd.conf', + content => template('./etc/httpd.conf.tpl', + acme_hosts => \@acme_hosts, + is_primary => $is_primary), + owner => 'root', + group => 'wheel', + mode => '644', + on_change => sub { service 'httpd' => 'restart' }; + + service 'httpd', ensure => 'started'; + }; + +desc 'Setup relayd'; +task 'relayd', group => 'frontends', + sub { + append_if_no_such_line '/etc/rc.conf.local', 'relayd_flags='; + + file '/etc/relayd.conf', + content => template('./etc/relayd.conf.tpl', + ipv6address => $ipv6address, + is_primary => $is_primary), + owner => 'root', + group => 'wheel', + mode => '600', + on_change => sub { service 'relayd' => 'restart' }; + + service 'relayd', ensure => 'started'; + }; + +desc 'Setup OpenSMTPD'; +task 'smtpd', group => 'frontends', + sub { + Rex::Logger::info('Dealing with mail aliases'); + file '/etc/mail/aliases', + source => './etc/mail/aliases', + owner => 'root', + group => 'wheel', + mode => '644', + on_change => sub { say run 'newaliases' }; + + Rex::Logger::info('Dealing with mail virtual domains'); + file '/etc/mail/virtualdomains', + source => 
'./etc/mail/virtualdomains', + owner => 'root', + group => 'wheel', + mode => '644', + on_change => sub { service 'smtpd' => 'restart' }; + + Rex::Logger::info('Dealing with mail virtual users'); + file '/etc/mail/virtualusers', + source => './etc/mail/virtualusers', + owner => 'root', + group => 'wheel', + mode => '644', + on_change => sub { service 'smtpd' => 'restart' }; + + Rex::Logger::info('Dealing with smtpd.conf'); + file '/etc/mail/smtpd.conf', + content => template('./etc/mail/smtpd.conf.tpl', + is_primary => $is_primary), + owner => 'root', + group => 'wheel', + mode => '644', + on_change => sub { service 'smtpd' => 'restart' }; + + service 'smtpd', ensure => 'started'; + }; + +``` + +This is the `httpd.conf.tpl`: + +``` +<% + our $primary = $is_primary->($vio0_ip); + our $prefix = $primary ? '' : 'www.'; +%> + +# Plain HTTP for ACME and HTTPS redirect +<% for my $host (@$acme_hosts) { %> +server "<%= $prefix.$host %>" { + listen on * port 80 + location "/.well-known/acme-challenge/*" { + root "/acme" + request strip 2 + } + location * { + block return 302 "https://$HTTP_HOST$REQUEST_URI" + } +} +<% } %> + +# Gemtexter hosts +<% for my $host (qw/foo.zone snonux.land/) { %> +server "<%= $prefix.$host %>" { + listen on * tls port 443 + tls { + certificate "/etc/ssl/<%= $prefix.$host %>.fullchain.pem" + key "/etc/ssl/private/<%= $prefix.$host %>.key" + } + location * { + root "/htdocs/gemtexter/<%= $host %>" + directory auto index + } +} +<% } %> + +# DTail special host +server "<%= $prefix %>dtail.dev" { + listen on * tls port 443 + tls { + certificate "/etc/ssl/<%= $prefix %>dtail.dev.fullchain.pem" + key "/etc/ssl/private/<%= $prefix %>dtail.dev.key" + } + location * { + block return 302 "https://github.dtail.dev$REQUEST_URI" + } +} + +# Irregular Ninja special host +server "<%= $prefix %>irregular.ninja" { + listen on * tls port 443 + tls { + certificate "/etc/ssl/<%= $prefix %>irregular.ninja.fullchain.pem" + key "/etc/ssl/private/<%= $prefix 
%>irregular.ninja.key" + } + location * { + root "/htdocs/irregular.ninja" + directory auto index + } +} + +# buetow.org special host. +server "<%= $prefix %>buetow.org" { + listen on * tls port 443 + tls { + certificate "/etc/ssl/<%= $prefix %>buetow.org.fullchain.pem" + key "/etc/ssl/private/<%= $prefix %>buetow.org.key" + } + block return 302 "https://paul.buetow.org" +} + +server "<%= $prefix %>paul.buetow.org" { + listen on * tls port 443 + tls { + certificate "/etc/ssl/<%= $prefix %>paul.buetow.org.fullchain.pem" + key "/etc/ssl/private/<%= $prefix %>paul.buetow.org.key" + } + block return 302 "https://foo.zone/contact-information.html" +} + +server "<%= $prefix %>tmp.buetow.org" { + listen on * tls port 443 + tls { + certificate "/etc/ssl/<%= $prefix %>tmp.buetow.org.fullchain.pem" + key "/etc/ssl/private/<%= $prefix %>tmp.buetow.org.key" + } + root "/htdocs/buetow.org/tmp" + directory auto index +} +``` + +and this the `relayd.conf.tpl`: + +``` +<% + our $primary = $is_primary->($vio0_ip); + our $prefix = $primary ? '' : 'www.'; +%> + +log connection + +tcp protocol "gemini" { + tls keypair <%= $prefix %>foo.zone + tls keypair <%= $prefix %>buetow.org +} + +relay "gemini4" { + listen on <%= $vio0_ip %> port 1965 tls + protocol "gemini" + forward to 127.0.0.1 port 11965 +} + +relay "gemini6" { + listen on <%= $ipv6address->($hostname) %> port 1965 tls + protocol "gemini" + forward to 127.0.0.1 port 11965 +} +``` + +And last but not least, this is the `smtpd.conf.tpl`: + +``` +<% + our $primary = $is_primary->($vio0_ip); + our $prefix = $primary ? 
'' : 'www.'; +%> + +pki "buetow_org_tls" cert "/etc/ssl/<%= $prefix %>buetow.org.fullchain.pem" +pki "buetow_org_tls" key "/etc/ssl/private/<%= $prefix %>buetow.org.key" + +table aliases file:/etc/mail/aliases +table virtualdomains file:/etc/mail/virtualdomains +table virtualusers file:/etc/mail/virtualusers + +listen on socket +listen on all tls pki "buetow_org_tls" hostname "<%= $prefix %>buetow.org" +#listen on all + +action localmail mbox alias <aliases> +action receive mbox virtual <virtualusers> +action outbound relay + +match from any for domain <virtualdomains> action receive +match from local for local action localmail +match from local for any action outbound +``` + +## All pieces together + +For the complete `Rexfile` example and all the templates, please look at the Git repository: + +=> https://codeberg.org/snonux/rexfiles + +Besides ACME, other things, such as DNS servers, are also rexified. The following command will run all the Rex tasks and configure everything on my frontend machines automatically: + +``` +rex commons +``` + +The `commons` is a group of tasks I specified which combines a set of common tasks I always want to execute on all frontend machines. This also includes the ACME tasks mentioned in this article! + +## Conclusion + +ACME and Let's Encrypt greatly help reduce recurring manual maintenance work (creating and renewing certificates). Furthermore, all the certificates are free of cost! I love to use OpenBSD and Rex to automate all of this. + +OpenBSD suits perfectly here as all the tools are already part of the base installation. But I like underdogs. Rex is not as powerful and popular as other configuration management systems (e.g. Puppet, Chef, SALT or even Ansible). It is more of an underdog, and the community is small. + +Why re-inventing the wheel? I love that a `Rexfile` is just a Perl DSL. Also, OpenBSD comes with Perl in the base system. 
So no new programming language had to be added to my mix for the configuration management system. Also, the `acme.sh` shell script is not a Bash script but a standard Bourne shell script, so I didn't have to install an additional shell, as OpenBSD does not ship with Bash pre-installed.
+
+Other *BSD related posts are:
+
+<< template::inline::index bsd
+
+E-Mail your comments to `paul@nospam.buetow.org` :-)
+
+=> ../ Back to the main site
diff --git a/gemfeed/2024-01-13-one-reason-why-i-love-openbsd.gmi b/gemfeed/2024-01-13-one-reason-why-i-love-openbsd.gmi
new file mode 100644
index 00000000..a29ec709
--- /dev/null
+++ b/gemfeed/2024-01-13-one-reason-why-i-love-openbsd.gmi
@@ -0,0 +1,71 @@
+# One reason why I love OpenBSD
+
+> Published at 2024-01-13T22:55:33+02:00
+
+```
+ .
+ A ;
+ | ,--,-/ \---,-/| ,
+ _|\,'. /| /| `/|-.
+ \`.' /| , `;.
+ ,'\ A A A A _ /| `.;
+ ,/ _ A _ / _ /| ;
+ /\ / \ , , A / / `/|
+ /_| | _ \ , , ,/ \
+ // | |/ `.\ ,- , , ,/ ,/ \/
+ / @| |@ / /' \ \ , > /| ,--.
+ |\_/ \_/ / | | , ,/ \ ./' __:..
+ | __ __ | | | .--. , > > |-' / `
+ ,/| / ' \ | | | \ , | /
+ / |<--.__,->| | | . `. > > / (
+/_,' \\ ^ / \ / / `. >-- /^\ |
+ \\___/ \ / / \__' \ \ \/ \ |
+ `. |/ , , /`\ \ )
+ \ ' |/ , V \ / `-\
+ `|/ ' V V \ \.' \_
+ '`-. V V \./'\
+ `|/-. \ / \ /,---`\ kat
+ n
+ / `._____V_____V'
+ ' '
+```
+
+I just upgraded my OpenBSD machines from `7.3` to `7.4` by following the unattended upgrade guide:
+
+=> https://www.openbsd.org/faq/upgrade74.html
+
+```shell
+doas installboot sd0 # Update the bootloader (not required for every upgrade)
+doas sysupgrade      # Update all binaries (including the kernel)
+```
+
+`sysupgrade` downloaded the next release, upgraded to it and rebooted the system. After the reboot, I ran:
+
+```shell
+doas sysmerge   # Update system configuration files
+doas pkg_add -u # Update all packages
+doas reboot     # Just in case, reboot one more time
+```
+
+That's it! It took me around 5 minutes in total! No issues, only these few commands, only 5 minutes!
It just works! No problems, no conflicts, and not a single config file merge conflict.
+
+I followed the same procedure the previous times and never encountered any difficulties with any OpenBSD upgrades.
+
+I have seen upgrades of other operating systems either take a long time or break the system (which takes manual steps to repair). That's just one of many reasons why I love OpenBSD! There never appear to be any problems. It just gets its job done!
+
+=> https://www.openbsd.org The OpenBSD Project
+
+BTW: are you looking for an opinionated OpenBSD VM hosting provider? OpenBSD Amsterdam may be for you. They rock (I have a VM there, too)!
+
+=> https://openbsd.amsterdam
+
+Other *BSD related posts are:
+
+=> ./2016-04-09-jails-and-zfs-on-freebsd-with-puppet.gmi 2016-04-09 Jails and ZFS with Puppet on FreeBSD
+=> ./2022-07-30-lets-encrypt-with-openbsd-and-rex.gmi 2022-07-30 Let's Encrypt with OpenBSD and Rex
+=> ./2022-10-30-installing-dtail-on-openbsd.gmi 2022-10-30 Installing DTail on OpenBSD
+=> ./2024-01-13-one-reason-why-i-love-openbsd.gmi 2024-01-13 One reason why I love OpenBSD (You are currently reading this)
+
+E-Mail your comments to `paul@nospam.buetow.org` :-)
+
+=> ../ Back to the main site
diff --git a/gemfeed/2024-01-13-one-reason-why-i-love-openbsd.gmi.tpl b/gemfeed/2024-01-13-one-reason-why-i-love-openbsd.gmi.tpl
new file mode 100644
index 00000000..1d9c71a1
--- /dev/null
+++ b/gemfeed/2024-01-13-one-reason-why-i-love-openbsd.gmi.tpl
@@ -0,0 +1,68 @@
+# One reason why I love OpenBSD
+
+> Published at 2024-01-13T22:55:33+02:00
+
+```
+ .
+ A ;
+ | ,--,-/ \---,-/| ,
+ _|\,'. /| /| `/|-.
+ \`.' /| , `;.
+ ,'\ A A A A _ /| `.;
+ ,/ _ A _ / _ /| ;
+ /\ / \ , , A / / `/|
+ /_| | _ \ , , ,/ \
+ // | |/ `.\ ,- , , ,/ ,/ \/
+ / @| |@ / /' \ \ , > /| ,--.
+ |\_/ \_/ / | | , ,/ \ ./' __:..
+ | __ __ | | | .--. , > > |-' / `
+ ,/| / ' \ | | | \ , | /
+ / |<--.__,->| | | . `. > > / (
+/_,' \\ ^ / \ / / `. >-- /^\ |
+ \\___/ \ / / \__' \ \ \/ \ |
+ `.
|/ , , /`\ \ )
+ \ ' |/ , V \ / `-\
+ `|/ ' V V \ \.' \_
+ '`-. V V \./'\
+ `|/-. \ / \ /,---`\ kat
+ n
+ / `._____V_____V'
+ ' '
+```
+
+I just upgraded my OpenBSD machines from `7.3` to `7.4` by following the unattended upgrade guide:
+
+=> https://www.openbsd.org/faq/upgrade74.html
+
+```shell
+doas installboot sd0 # Update the bootloader (not required for every upgrade)
+doas sysupgrade      # Update all binaries (including the kernel)
+```
+
+`sysupgrade` downloaded the next release, upgraded to it and rebooted the system. After the reboot, I ran:
+
+```shell
+doas sysmerge   # Update system configuration files
+doas pkg_add -u # Update all packages
+doas reboot     # Just in case, reboot one more time
+```
+
+That's it! It took me around 5 minutes in total! No issues, only these few commands, only 5 minutes! It just works! No problems, no conflicts, and not a single config file merge conflict.
+
+I followed the same procedure the previous times and never encountered any difficulties with any OpenBSD upgrades.
+
+I have seen upgrades of other operating systems either take a long time or break the system (which takes manual steps to repair). That's just one of many reasons why I love OpenBSD! There never appear to be any problems. It just gets its job done!
+
+=> https://www.openbsd.org The OpenBSD Project
+
+BTW: are you looking for an opinionated OpenBSD VM hosting provider? OpenBSD Amsterdam may be for you. They rock (I have a VM there, too)!
+ +=> https://openbsd.amsterdam + +Other *BSD related posts are: + +<< template::inline::index bsd + +E-Mail your comments to `paul@nospam.buetow.org` :-) + +=> ../ Back to the main site diff --git a/gemfeed/atom.xml b/gemfeed/atom.xml index e22d2d0f..1ad8513c 100644 --- a/gemfeed/atom.xml +++ b/gemfeed/atom.xml @@ -1,12 +1,104 @@ <?xml version="1.0" encoding="utf-8"?> <feed xmlns="http://www.w3.org/2005/Atom"> - <updated>2024-01-09T18:45:17+02:00</updated> + <updated>2024-01-13T23:06:21+02:00</updated> <title>foo.zone feed</title> <subtitle>To be in the .zone!</subtitle> <link href="gemini://foo.zone/gemfeed/atom.xml" rel="self" /> <link href="gemini://foo.zone/" /> <id>gemini://foo.zone/</id> <entry> + <title>One reason why I love OpenBSD</title> + <link href="gemini://foo.zone/gemfeed/2024-01-13-one-reason-why-i-love-openbsd.gmi" /> + <id>gemini://foo.zone/gemfeed/2024-01-13-one-reason-why-i-love-openbsd.gmi</id> + <updated>2024-01-13T22:55:33+02:00</updated> + <author> + <name>Paul Buetow aka snonux</name> + <email>paul@dev.buetow.org</email> + </author> + <summary>I just upgraded my OpenBSD's from `7.3` to `7.4` by following the unattended upgrade guide:</summary> + <content type="xhtml"> + <div xmlns="http://www.w3.org/1999/xhtml"> + <h1 style='display: inline'>One reason why I love OpenBSD</h1><br /> +<br /> +<span class='quote'>Published at 2024-01-13T22:55:33+02:00</span><br /> +<br /> +<pre> + . + A ; + | ,--,-/ \---,-/| , + _|\,'. /| /| `/|-. + \`.' /| , `;. + ,'\ A A A A _ /| `.; + ,/ _ A _ / _ /| ; + /\ / \ , , A / / `/| + /_| | _ \ , , ,/ \ + // | |/ `.\ ,- , , ,/ ,/ \/ + / @| |@ / /' \ \ , > /| ,--. + |\_/ \_/ / | | , ,/ \ ./' __:.. + | __ __ | | | .--. , > > |-' / ` + ,/| / ' \ | | | \ , | / + / |<--.__,->| | | . `. > > / ( +/_,' \\ ^ / \ / / `. >-- /^\ | + \\___/ \ / / \__' \ \ \/ \ | + `. |/ , , /`\ \ ) + \ ' |/ , V \ / `-\ + `|/ ' V V \ \.' \_ + '`-. V V \./'\ + `|/-. 
\ / \ /,---`\ kat
+ n
+ / `._____V_____V'
+ ' '
+</pre>
+<br />
+<span>I just upgraded my OpenBSD machines from <span class='inlinecode'>7.3</span> to <span class='inlinecode'>7.4</span> by following the unattended upgrade guide:</span><br />
+<br />
+<a class='textlink' href='https://www.openbsd.org/faq/upgrade74.html'>https://www.openbsd.org/faq/upgrade74.html</a><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>doas installboot sd0 <i><font color="#9A1900"># Update the bootloader (not required for every upgrade)</font></i>
+doas sysupgrade <i><font color="#9A1900"># Update all binaries (including the kernel)</font></i>
+</pre>
+<br />
+<span><span class='inlinecode'>sysupgrade</span> downloaded the next release, upgraded to it and rebooted the system. After the reboot, I ran:</span><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>doas sysmerge <i><font color="#9A1900"># Update system configuration files</font></i>
+doas pkg_add -u <i><font color="#9A1900"># Update all packages</font></i>
+doas reboot <i><font color="#9A1900"># Just in case, reboot one more time</font></i>
+</pre>
+<br />
+<span>That's it! It took me around 5 minutes in total! No issues, only these few commands, only 5 minutes! It just works! No problems, no conflicts, and not a single config file merge conflict.</span><br />
+<br />
+<span>I followed the same procedure the previous times and never encountered any difficulties with any OpenBSD upgrades.</span><br />
+<br />
+<span>I have seen upgrades of other operating systems either take a long time or break the system (which takes manual steps to repair). That's just one of many reasons why I love OpenBSD! There never appear to be any problems.
It just gets its job done!</span><br />
+<br />
+<a class='textlink' href='https://www.openbsd.org'>The OpenBSD Project</a><br />
+<br />
+<span>BTW: are you looking for an opinionated OpenBSD VM hosting provider? OpenBSD Amsterdam may be for you. They rock (I have a VM there, too)!</span><br />
+<br />
+<a class='textlink' href='https://openbsd.amsterdam'>https://openbsd.amsterdam</a><br />
+<br />
+<span>Other *BSD related posts are:</span><br />
+<br />
+<a class='textlink' href='./2016-04-09-jails-and-zfs-on-freebsd-with-puppet.html'>2016-04-09 Jails and ZFS with Puppet on FreeBSD</a><br />
+<a class='textlink' href='./2022-07-30-lets-encrypt-with-openbsd-and-rex.html'>2022-07-30 Let's Encrypt with OpenBSD and Rex</a><br />
+<a class='textlink' href='./2022-10-30-installing-dtail-on-openbsd.html'>2022-10-30 Installing DTail on OpenBSD</a><br />
+<a class='textlink' href='./2024-01-13-one-reason-why-i-love-openbsd.html'>2024-01-13 One reason why I love OpenBSD (You are currently reading this)</a><br />
+<br />
+<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
+<br />
+<a class='textlink' href='../'>Back to the main site</a><br />
+ </div>
+ </content>
+ </entry>
+ <entry>
 <title>Site Reliability Engineering - Part 3: On-Call Culture and the Human Aspect</title>
 <link href="gemini://foo.zone/gemfeed/2024-01-09-site-reliability-engineering-part-3.gmi" />
 <id>gemini://foo.zone/gemfeed/2024-01-09-site-reliability-engineering-part-3.gmi</id>
@@ -4899,6 +4991,13 @@ rex commons
 <br />
 <span>Why reinvent the wheel? I love that a <span class='inlinecode'>Rexfile</span> is just a Perl DSL. Also, OpenBSD comes with Perl in the base system. So no new programming language had to be added to my mix for the configuration management system.
Also, the <span class='inlinecode'>acme.sh</span> shell script is not a Bash script but a standard Bourne shell script, so I didn't have to install an additional shell, as OpenBSD does not ship with Bash pre-installed.</span><br />
<br />
+<span>Other *BSD related posts are:</span><br />
+<br />
+<a class='textlink' href='./2016-04-09-jails-and-zfs-on-freebsd-with-puppet.html'>2016-04-09 Jails and ZFS with Puppet on FreeBSD</a><br />
+<a class='textlink' href='./2022-07-30-lets-encrypt-with-openbsd-and-rex.html'>2022-07-30 Let's Encrypt with OpenBSD and Rex (You are currently reading this)</a><br />
+<a class='textlink' href='./2022-10-30-installing-dtail-on-openbsd.html'>2022-10-30 Installing DTail on OpenBSD</a><br />
+<a class='textlink' href='./2024-01-13-one-reason-why-i-love-openbsd.html'>2024-01-13 One reason why I love OpenBSD</a><br />
+<br />
 <span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
 <br />
 <a class='textlink' href='../'>Back to the main site</a><br />
@@ -8776,210 +8875,4 @@ dtail --servers serverlist.txt --files '/var/log/*.log' --regex '(?i:er
 </div>
 </content>
 </entry>
- <entry>
- <title>Realistic load testing with I/O Riot for Linux</title>
- <link href="gemini://foo.zone/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux.gmi" />
- <id>gemini://foo.zone/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux.gmi</id>
- <updated>2018-06-01T14:50:29+01:00</updated>
- <author>
- <name>Paul Buetow aka snonux</name>
- <email>paul@dev.buetow.org</email>
- </author>
- <summary>This text was first published in the German IT-Administrator computer magazine. Three years have passed since then, and I decided to publish it on my blog too.
</summary>
- <content type="xhtml">
- <div xmlns="http://www.w3.org/1999/xhtml">
- <h1 style='display: inline'>Realistic load testing with I/O Riot for Linux</h1><br />
-<br />
-<span class='quote'>Published at 2018-06-01T14:50:29+01:00; Updated at 2021-05-08</span><br />
-<br />
-<pre>
- .---.
- / \
- \.@-@./
- /`\_/`\
- // _ \\
- | \ )|_
- /`\_`> <_/ \
-jgs\__/'---'\__/
-</pre>
-<br />
-<h2 style='display: inline'>Foreword</h2><br />
-<br />
-<span>This text was first published in the German IT-Administrator computer magazine. Three years have passed since then, and I decided to publish it on my blog too.</span><br />
-<br />
-<a class='textlink' href='https://www.admin-magazin.de/Das-Heft/2018/06/Realistische-Lasttests-mit-I-O-Riot'>https://www.admin-magazin.de/Das-Heft/2018/06/Realistische-Lasttests-mit-I-O-Riot</a><br />
-<br />
-<span>I haven't worked on I/O Riot for some time now, but everything written here is still valid. I am still using I/O Riot to debug I/O issues and patterns once in a while, so by all means the tool is not obsolete yet. The tool even helped to resolve a major production incident at work caused by disk I/O.</span><br />
-<br />
-<span>I am eagerly looking forward to revamping I/O Riot so that it uses the new BPF Linux capabilities instead of plain old Systemtap (or alternatively, I have learned that newer versions of Systemtap can also use BPF as the backend). Also, when I wrote I/O Riot initially, I didn't have any experience with the Go programming language yet, and therefore I wrote it in C. Once it gets revamped, I might consider using Go instead of C, as it would spare me from many segmentation faults and headaches during development ;-). I might also just stick to C for plain performance reasons and only refactor the code dealing with concurrency.</span><br />
-<br />
-<span>Please note that some of the screenshots show the command "ioreplay" instead of "ioriot".
That's because the name has changed after taking those.</span><br /> -<br /> -<h1 style='display: inline'>The article</h1><br /> -<br /> -<span>With I/O Riot IT administrators can load test and optimize the I/O subsystem of Linux-based operating systems. The tool makes it possible to record I/O patterns and replay them at a later time as often as desired. This means bottlenecks can be reproduced and eradicated. </span><br /> -<br /> -<span>When storing huge amounts of data, such as more than 200 billion archived emails at Mimecast, it's not only the available storage capacity that matters, but also the data throughput and latency. At the same time, operating costs must be kept as low as possible. The more systems involved, the more important it is to optimize the hardware, the operating system and the applications running on it.</span><br /> -<br /> -<h2 style='display: inline'>Background: Existing Techniques</h2><br /> -<br /> -<span>Conventional I/O benchmarking: Administrators usually use open source benchmarking tools like IOZone and bonnie++. Available database systems such as Redis and MySQL come with their own benchmarking tools. The common problem with these tools is that they work with prescribed artificial I/O patterns. Although this can test both sequential and randomized data access, the patterns do not correspond to what can be found on production systems.</span><br /> -<br /> -<span>Testing by load test environment: Another option is to use a separate load test environment in which, as far as possible, a production environment with all its dependencies is simulated. However, an environment consisting of many microservices is very complex. Microservices are usually managed by different teams, which means extra coordination effort for each load test. Another challenge is to generate the load as authentically as possible so that the patterns correspond to a productive environment. 
Such a load test environment can only handle as many requests as its weakest link can handle. For example, load generators send many read and write requests to a frontend microservice, whereby the frontend forwards the requests to a backend microservice responsible for storing the data. If the frontend service does not process the requests efficiently enough, the backend service is not well utilized in the first place. As a rule, all microservices are clustered across many servers, which makes everything even more complicated. Under all these conditions it is very difficult to test I/O of separate backend systems. Moreover, for many small and medium-sized companies, a separate load test environment would not be feasible for cost reasons.</span><br /> -<br /> -<span>Testing in the production environment: For these reasons, benchmarks are often carried out in the production environment. In order to derive value from this such tests are especially performed during peak hours when systems are under high load. However, testing on production systems is associated with risks and can lead to failure or loss of data without adequate protection.</span><br /> -<br /> -<h2 style='display: inline'>Benchmarking the Email Cloud at Mimecast</h2><br /> -<br /> -<span>For email archiving, Mimecast uses an internally developed microservice, which is operated directly on Linux-based storage systems. A storage cluster is divided into several replication volumes. Data is always replicated three times across two secure data centers. Customer data is automatically allocated to one or more volumes, depending on throughput, so that all volumes are automatically assigned the same load. Customer data is archived on conventional, but inexpensive hard disks with several terabytes of storage capacity each. I/O benchmarking proved difficult for all the reasons mentioned above. Furthermore, there are no ready-made tools for this purpose in the case of self-developed software. 
The service operates on many block devices simultaneously, which can make the RAID controller a bottleneck. None of the freely available benchmarking tools can test several block devices at the same time without extra effort. In addition, emails typically consist of many small files. Randomized access to many small files is particularly inefficient. In addition to many software adaptations, the hardware and operating system must also be optimized.</span><br /> -<br /> -<span>Mimecast encourages employees to be innovative and pursue their own ideas in the form of an internal competition, Pet Project. The goal of the pet project I/O Riot was to simplify OS and hardware level I/O benchmarking. The first prototype of I/O Riot was awarded an internal roadmap prize in the spring of 2017. A few months later, I/O Riot was used to reduce write latency in the storage clusters by about 50%. The improvement was first verified by I/O replay on a test system and then successively applied to all storage systems. I/O Riot was also used to resolve a production incident caused by disk I/O load.</span><br /> -<br /> -<h2 style='display: inline'>Using I/O Riot</h2><br /> -<br /> -<span>First, all I/O events are logged to a file on a production system with I/O Riot. It is then copied to a test system where all events are replayed in the same way. The crucial point here is that you can reproduce I/O patterns as they are found on a production system as often as you like on a test system. This results in the possibility of optimizing the set screws on the system after each run.</span><br /> -<br /> -<h3 style='display: inline'>Installation</h3><br /> -<br /> -<span>I/O Riot was tested under CentOS 7.2 x86_64. For compiling, the GNU C compiler and Systemtap including kernel debug information are required. Other Linux distributions are theoretically compatible but untested. 
First of all, you should update the systems involved as follows:</span><br /> -<br /> -<pre> -% sudo yum update -</pre> -<br /> -<span>If the kernel is updated, please restart the system. The installation would be done without a restart but this would complicate the installation. The installed kernel version should always correspond to the currently running kernel. You can then install I/O Riot as follows:</span><br /> -<br /> -<pre> -% sudo yum install gcc git systemtap yum-utils kernel-devel-$(uname -r) -% sudo debuginfo-install kernel-$(uname -r) -% git clone https://github.com/mimecast/ioriot -% cd ioriot -% make -% sudo make install -% export PATH=$PATH:/opt/ioriot/bin -</pre> -<br /> -<span>Note: It is not best practice to install any compilers on production systems. For further information please have a look at the enclosed README.md.</span><br /> -<br /> -<h3 style='display: inline'>Recording of I/O events</h3><br /> -<br /> -<span>All I/O events are kernel related. If a process wants to perform an I/O operation, such as opening a file, it must inform the kernel of this by a system call (short syscall). I/O Riot relies on the Systemtap tool to record I/O syscalls. Systemtap, available for all popular Linux distributions, helps you to take a look at the running kernel in productive environments, which makes it predestined to monitor all I/O-relevant Linux syscalls and log them to a file. Other tools, such as strace, are not an alternative because they slow down the system too much.</span><br /> -<br /> -<span>During recording, ioriot acts as a wrapper and executes all relevant Systemtap commands for you. 
Use the following command to log all events to io.capture:</span><br />
-<br />
-<pre>
-% sudo ioriot -c io.capture
-</pre>
-<br />
-<a href='./2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure1-ioriot-io-recording.png'><img alt='Screenshot I/O recording' title='Screenshot I/O recording' src='./2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure1-ioriot-io-recording.png' /></a><br />
-<br />
-<span>A Ctrl-C (SIGINT) stops the recording prematurely. Otherwise, ioriot terminates itself automatically after 1 hour. Depending on the system load, the output file can grow to several gigabytes. Only metadata is logged, not the read and written data itself. When replaying later, only random data is used. Under certain circumstances, Systemtap may omit some system calls and issue warnings. This is to ensure that Systemtap does not consume too many resources.</span><br />
-<br />
-<h3 style='display: inline'>Test preparation</h3><br />
-<br />
-<span>Then copy io.capture to a test system. The log also contains all accesses to the pseudo file systems devfs, sysfs and procfs. Replaying these makes little sense, which is why you must first generate a cleaned-up, replayable version io.replay from io.capture as follows:</span><br />
-<br />
-<pre>
-% sudo ioriot -c io.capture -r io.replay -u $USER -n TESTNAME
-</pre>
-<br />
-<span>The parameter -n allows you to assign a freely selectable test name. The system user under which the test is to be replayed is specified via the parameter -u.</span><br />
-<br />
-<h3 style='display: inline'>Test Initialization</h3><br />
-<br />
-<span>The test will most likely want to access existing files. These are files the test wants to read but does not create by itself. Their existence must be ensured before the test. You can do this as follows:</span><br />
-<br />
-<pre>
-% sudo ioriot -i io.replay
-</pre>
-<br />
-<span>To avoid any damage to the running system, ioriot only works in special directories.
The tool creates a separate subdirectory for each file system mount point (e.g. /, /usr/local, /store/00,...) (here: /.ioriot/TESTNAME, /usr/local/.ioriot/TESTNAME, /store/00/.ioriot/TESTNAME,...). By default, the working directory of ioriot is /usr/local/ioriot/TESTNAME.</span><br /> -<br /> -<a href='./2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure2-ioriot-test-preparation.png'><img alt='Screenshot test preparation' title='Screenshot test preparation' src='./2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure2-ioriot-test-preparation.png' /></a><br /> -<br /> -<span>You must re-initialize the environment before each run. Data from previous tests will be moved to a trash directory automatically, which can be finally deleted with "sudo ioriot -P".</span><br /> -<br /> -<h3 style='display: inline'>Replay</h3><br /> -<br /> -<span>After initialization, you can replay the log with -r. You can use -R to initiate both test initialization and replay in a single command and -S can be used to specify a file in which statistics are written after the test run.</span><br /> -<br /> -<span>You can also influence the playback speed: "-s 0" is interpreted as "Playback as fast as possible" and is the default setting. With "-s 1" all operations are performed at original speed. "-s 2" would double the playback speed and "-s 0.5" would halve it.</span><br /> -<br /> -<a href='./2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure3-ioriot-replay.png'><img alt='Screenshot replaying I/O' title='Screenshot replaying I/O' src='./2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure3-ioriot-replay.png' /></a><br /> -<br /> -<span>As an initial test, for example, you could compare the two Linux I/O schedulers CFQ and Deadline and check which scheduler the test runs the fastest. They run the test separately for each scheduler. 
The following shell loop iterates through all attached block devices of the system and changes their I/O scheduler to the one specified in variable $new_scheduler (in this case either cfq or deadline). Subsequently, all I/O events from the io.replay protocol are played back. At the end, an output file with statistics is generated:</span><br />
-<br />
-<pre>
-% new_scheduler=cfq
-% for scheduler in /sys/block/*/queue/scheduler; do
- echo $new_scheduler | sudo tee $scheduler
-done
-% sudo ioriot -R io.replay -S cfq.txt
-% new_scheduler=deadline
-% for scheduler in /sys/block/*/queue/scheduler; do
- echo $new_scheduler | sudo tee $scheduler
-done
-% sudo ioriot -R io.replay -S deadline.txt
-</pre>
-<br />
-<span>According to the results, the test could run 940 seconds faster with the Deadline Scheduler:</span><br />
-<br />
-<pre>
-% cat cfq.txt
-Num workers: 4
-Threads per worker: 128
-Total threads: 512
-Highest loadavg: 259.29
-Performed ioops: 218624596
-Average ioops/s: 101544.17
-Time ahead: 1452s
-Total time: 2153.00s
-% cat deadline.txt
-Num workers: 4
-Threads per worker: 128
-Total threads: 512
-Highest loadavg: 342.45
-Performed ioops: 218624596
-Average ioops/s: 180234.62
-Time ahead: 2392s
-Total time: 1213.00s
-</pre>
-<br />
-<span>In any case, you should also set up a time series database, such as Graphite, where the I/O throughput can be plotted. Figures 4 and 5 show the read and write access times of both tests. The dip in the graphs makes it clear when the CFQ test ended and the Deadline test started. The read latency of both tests is similar. Write latency is dramatically improved using the Deadline Scheduler.</span><br />
-<br />
-<a href='./2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure4-ioriot-read-latency.png'><img alt='Graphite visualization of the mean read access times in ms with CFQ and Deadline Scheduler.' title='Graphite visualization of the mean read access times in ms with CFQ and Deadline Scheduler.' 
src='./2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure4-ioriot-read-latency.png' /></a><br /> -<br /> -<a href='./2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure5-ioriot-write-latency.png'><img alt='Graphite visualization of the average write access times in ms with CFQ and Deadline Scheduler.' title='Graphite visualization of the average write access times in ms with CFQ and Deadline Scheduler.' src='./2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure5-ioriot-write-latency.png' /></a><br /> -<br /> -<span>You should also take a look at the iostat tool. The iostat screenshot shows the output of iostat -x 10 during a test run. As you can see, a block device is fully loaded with 99% utilization, while all other block devices still have sufficient buffer. This could be an indication of poor data distribution in the storage system and is worth pursuing. It is not uncommon for I/O Riot to reveal software problems.</span><br /> -<br /> -<a href='./2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure6-iostat.png'><img alt='Output of iostat. The block device sdy seems to be almost fully utilized by 99%.' title='Output of iostat. The block device sdy seems to be almost fully utilized by 99%.' src='./2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure6-iostat.png' /></a><br /> -<br /> -<h2 style='display: inline'>I/O Riot is Open Source</h2><br /> -<br /> -<span>The tool has already proven to be very useful and will continue to be actively developed as time and priority permits. Mimecast intends to be an ongoing contributor to Open Source. You can find I/O Riot at:</span><br /> -<br /> -<a class='textlink' href='https://github.com/mimecast/ioriot'>https://github.com/mimecast/ioriot</a><br /> -<br /> -<h2 style='display: inline'>Systemtap</h2><br /> -<br /> -<span>Systemtap is a tool for the instrumentation of the Linux kernel. The tool provides an AWK-like programming language. 
Programs written in it are compiled by Systemtap to C and then into a dynamically loadable kernel module. Loaded into the kernel, the program has access to Linux internals. A Systemtap program written for I/O Riot monitors which I/O syscalls take place, when, with which parameters, from which process, and with which return values.</span><br />
-<br />
-<span>For example, the open syscall opens a file and returns the responsible file descriptor. The read and write syscalls can operate on a file descriptor and return the number of read or written bytes. The close syscall closes a given file descriptor. I/O Riot comes with a ready-made Systemtap program, which has already been compiled into a kernel module and installed to /opt/ioriot. In addition to open, read and close, it logs many other I/O-relevant calls.</span><br />
-<br />
-<a class='textlink' href='https://sourceware.org/systemtap/'>https://sourceware.org/systemtap/</a><br />
-<br />
-<h2 style='display: inline'>More references</h2><br />
-<br />
-<a class='textlink' href='http://www.iozone.org/'>IOZone</a><br />
-<a class='textlink' href='https://www.coker.com.au/bonnie++/'>Bonnie++</a><br />
-<a class='textlink' href='https://graphiteapp.org'>Graphite</a><br />
-<a class='textlink' href='https://en.wikipedia.org/wiki/Memory-mapped_I/O'>Memory mapped I/O</a><br />
-<br />
-<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
-<br />
-<a class='textlink' href='../'>Back to the main site</a><br />
- </div>
- </content>
- </entry>
</feed>
diff --git a/gemfeed/atom.xml.tmp b/gemfeed/atom.xml.tmp
new file mode 100644
index 00000000..f603be57
--- /dev/null
+++ b/gemfeed/atom.xml.tmp
@@ -0,0 +1,655 @@
+<?xml version="1.0" encoding="utf-8"?>
+<feed xmlns="http://www.w3.org/2005/Atom">
+ <updated>2024-01-13T23:08:07+02:00</updated>
+ <title>foo.zone feed</title>
+ <subtitle>To be in the .zone!</subtitle>
+ <link href="gemini://foo.zone/gemfeed/atom.xml" 
rel="self" /> + <link href="gemini://foo.zone/" /> + <id>gemini://foo.zone/</id> + <entry> + <title>One reason why I love OpenBSD</title> + <link href="gemini://foo.zone/gemfeed/2024-01-13-one-reason-why-i-love-openbsd.gmi" /> + <id>gemini://foo.zone/gemfeed/2024-01-13-one-reason-why-i-love-openbsd.gmi</id> + <updated>2024-01-13T22:55:33+02:00</updated> + <author> + <name>Paul Buetow aka snonux</name> + <email>paul@dev.buetow.org</email> + </author> + <summary>I just upgraded my OpenBSD's from `7.3` to `7.4` by following the unattended upgrade guide:</summary> + <content type="xhtml"> + <div xmlns="http://www.w3.org/1999/xhtml"> + <h1 style='display: inline'>One reason why I love OpenBSD</h1><br /> +<br /> +<span class='quote'>Published at 2024-01-13T22:55:33+02:00</span><br /> +<br /> +<pre> + . + A ; + | ,--,-/ \---,-/| , + _|\,'. /| /| `/|-. + \`.' /| , `;. + ,'\ A A A A _ /| `.; + ,/ _ A _ / _ /| ; + /\ / \ , , A / / `/| + /_| | _ \ , , ,/ \ + // | |/ `.\ ,- , , ,/ ,/ \/ + / @| |@ / /' \ \ , > /| ,--. + |\_/ \_/ / | | , ,/ \ ./' __:.. + | __ __ | | | .--. , > > |-' / ` + ,/| / ' \ | | | \ , | / + / |<--.__,->| | | . `. > > / ( +/_,' \\ ^ / \ / / `. >-- /^\ | + \\___/ \ / / \__' \ \ \/ \ | + `. |/ , , /`\ \ ) + \ ' |/ , V \ / `-\ + `|/ ' V V \ \.' \_ + '`-. V V \./'\ + `|/-. 
\ / \ /,---`\ kat
+ n
+ / `._____V_____V'
+ ' '
+</pre>
+<br />
+<span>I just upgraded my OpenBSD systems from <span class='inlinecode'>7.3</span> to <span class='inlinecode'>7.4</span> by following the unattended upgrade guide:</span><br />
+<br />
+<a class='textlink' href='https://www.openbsd.org/faq/upgrade74.html'>https://www.openbsd.org/faq/upgrade74.html</a><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>doas installboot sd0 <i><font color="#9A1900"># Update the bootloader (not required for every upgrade)</font></i>
+doas sysupgrade <i><font color="#9A1900"># Update all binaries (including the kernel)</font></i>
+</pre>
+<br />
+<span><span class='inlinecode'>sysupgrade</span> downloaded the next release, upgraded the system and rebooted it. After the reboot, I ran:</span><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>doas sysmerge <i><font color="#9A1900"># Update system configuration files</font></i>
+doas pkg_add -u <i><font color="#9A1900"># Update all packages</font></i>
+doas reboot <i><font color="#9A1900"># Just in case, reboot one more time</font></i>
+</pre>
+<br />
+<span>That's it! It took me around 5 minutes in total! No issues, just these few commands! It just works! No problems, no conflicts, and not a single config file merge conflict.</span><br />
+<br />
+<span>I followed the same procedure the previous times and never encountered any difficulties with any OpenBSD upgrades.</span><br />
+<br />
+<span>I have seen upgrades of other operating systems either take a long time or break the system (which then takes manual steps to repair). That's just one of many reasons why I love OpenBSD! There never appear to be any problems. 
It just gets its job done!</span><br />
+<br />
+<a class='textlink' href='https://www.openbsd.org'>The OpenBSD Project</a><br />
+<br />
+<span>BTW: are you looking for an opinionated OpenBSD VM hoster? OpenBSD Amsterdam may be for you. They rock (I have a VM there, too)!</span><br />
+<br />
+<a class='textlink' href='https://openbsd.amsterdam'>https://openbsd.amsterdam</a><br />
+<br />
+<span>Other *BSD related posts are:</span><br />
+<br />
+<a class='textlink' href='./2016-04-09-jails-and-zfs-on-freebsd-with-puppet.html'>2016-04-09 Jails and ZFS with Puppet on FreeBSD</a><br />
+<a class='textlink' href='./2022-07-30-lets-encrypt-with-openbsd-and-rex.html'>2022-07-30 Let's Encrypt with OpenBSD and Rex</a><br />
+<a class='textlink' href='./2022-10-30-installing-dtail-on-openbsd.html'>2022-10-30 Installing DTail on OpenBSD</a><br />
+<a class='textlink' href='./2024-01-13-one-reason-why-i-love-openbsd.html'>2024-01-13 One reason why I love OpenBSD (You are currently reading this)</a><br />
+<br />
+<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
+<br />
+<a class='textlink' href='../'>Back to the main site</a><br />
+ </div>
+ </content>
+ </entry>
+ <entry>
+ <title>Site Reliability Engineering - Part 3: On-Call Culture and the Human Aspect</title>
+ <link href="gemini://foo.zone/gemfeed/2024-01-09-site-reliability-engineering-part-3.gmi" />
+ <id>gemini://foo.zone/gemfeed/2024-01-09-site-reliability-engineering-part-3.gmi</id>
+ <updated>2024-01-09T18:35:48+02:00</updated>
+ <author>
+ <name>Paul Buetow aka snonux</name>
+ <email>paul@dev.buetow.org</email>
+ </author>
+ <summary>This is the third part of my Site Reliability Engineering (SRE) series. 
I am currently employed as a Site Reliability Engineer and will try to share what SRE is about in this blog series.</summary> + <content type="xhtml"> + <div xmlns="http://www.w3.org/1999/xhtml"> + <h1 style='display: inline'>Site Reliability Engineering - Part 3: On-Call Culture and the Human Aspect</h1><br /> +<br /> +<span class='quote'>Published at 2024-01-09T18:35:48+02:00</span><br /> +<br /> +<span>This is the third part of my Site Reliability Engineering (SRE) series. I am currently employed as a Site Reliability Engineer and will try to share what SRE is about in this blog series.</span><br /> +<br /> +<a class='textlink' href='./2023-08-18-site-reliability-engineering-part-1.html'>2023-08-18 Site Reliability Engineering - Part 1: SRE and Organizational Culture</a><br /> +<a class='textlink' href='./2023-11-19-site-reliability-engineering-part-2.html'>2023-11-19 Site Reliability Engineering - Part 2: Operational Balance in SRE</a><br /> +<a class='textlink' href='./2024-01-09-site-reliability-engineering-part-3.html'>2024-01-09 Site Reliability Engineering - Part 3: On-Call Culture and the Human Aspect (You are currently reading this)</a><br /> +<br /> +<pre> + ..--""""----.. + .-" ..--""""--.j-. + .-" .-" .--.""--.. + .-" .-" ..--"-. \/ ; + .-" .-"_.--..--"" ..--' "-. : + .' .' / `. \..--"" __ _ \ ; + :.__.-" \ / .' ( )"-. Y + ; ;: ( ) ( ). \ + .': /:: : \ \ + .'.-"\._ _.-" ; ; ( ) .-. ( ) \ + " `.""" .j" : : \ ; ; \ + bug /"""""/ ; ( ) "" :.( ) \ + /\ / : \ \`.: _ \ + : `. / ; `( ) (\/ :" \ \ + \ `. : "-.(_)_.' t-' ; + \ `. ; ..--": + `. `. : ..--"" : + `. "-. ; ..--"" ; + `. "-.:_..--"" ..--" + `. : ..--"" + "-. : ..--"" + "-.;_..--"" + +</pre> +<br /> +<h2 style='display: inline'>On-Call Culture and the Human Aspect: Prioritising Well-being in the Realm of Reliability</h2><br /> +<br /> +<span>Site Reliability Engineering is synonymous with ensuring system reliability, but the human factor is an often-underestimated part of this discipline. 
Ensuring a healthy on-call culture is as critical as any technical solution. The well-being of the engineers is an important factor.</span><br />
+<br />
+<span>Firstly, a healthy on-call rotation is about more than just managing and responding to incidents. It's about the entire ecosystem that supports this practice. This involves reducing pain points, offering mentorship, rapid iteration, and ensuring that engineers have the right tools and processes. One caveat is that engineers should be willing to learn. Especially in on-call rotations embedding SREs with other engineers (for example Software Engineers or QA Engineers), it's difficult to motivate everyone to engage. QA Engineers want to test the software, Software Engineers want to implement new features; they don't want to troubleshoot and debug production incidents. It can be depressing for the mentoring SRE.</span><br />
+<br />
+<span>Furthermore, the metrics that measure the success of an on-call experience are not always straightforward. While one might assume that fewer pages translate to a better on-call experience (which is true to a degree, as who wants to receive a page out of office hours?), it's not always the volume of pages that matters most. Trust, ownership, accountability, and effective communication play the most important roles.</span><br />
+<br />
+<span>An important part is giving feedback about the on-call experience to ensure continuous learning. If alerts are mostly noise, they should be tuned or even eliminated. If alerts are actionable, can recurring tasks be automated? If there are knowledge gaps, is the documentation not good enough? Continuous retrospection ensures that not only do systems evolve, but the experience for the on-call engineers becomes progressively better.</span><br />
+<br />
+<span>Onboarding for on-call duties is a crucial aspect of ensuring the reliability and efficiency of systems. 
This process involves equipping new team members with the knowledge, tools, and support to handle incidents confidently. It begins with an overview of the system architecture and common challenges, followed by training on monitoring tools, alerting mechanisms, and incident response protocols. Shadowing experienced on-call engineers can offer practical exposure. Too often, new engineers are thrown in at the deep end without proper onboarding and training because the more experienced engineers are too busy fire-fighting production issues in the first place.</span><br />
+<br />
+<span>An always-on, always-alert culture can lead to burnout. Engineers should be encouraged to recognise their limits, take breaks, and seek support when needed. This isn't just about individual health; a burnt-out engineer can have cascading effects on the entire team and the systems they manage. A successful on-call culture ensures that while systems are kept running, the engineers are kept happy, healthy, and supported. The more experienced engineers should take time to mentor the junior engineers, but the junior engineers should also be fully engaged and try to investigate and learn new things by themselves.</span><br />
+<br />
+<span>For the junior engineer, it's too easy to fall back on asking the experts in the team every time an issue arises. This seems reasonable, but serving recipes for solving production issues on a silver platter won't scale forever, as there are infinite scenarios of how production systems can break. So every engineer should learn to debug, troubleshoot and resolve production incidents independently. The experts will still be there for guidance and step in when the junior gets stuck after trying, but the experts should also learn to step back so that less experienced engineers can step up and learn. 
But mistakes can always happen here; that's why having a blameless on-call culture is essential.</span><br /> +<br /> +<span>A blameless on-call culture is a must for a safe and collaborative environment where engineers can effectively respond to incidents without fear of retribution. This approach acknowledges that mistakes are a natural part of the learning and innovation process. When individuals are assured they won't be punished for errors, they're more likely to openly discuss mistakes, allowing the entire team to learn and grow from each incident. Furthermore, a blameless culture promotes psychological safety, enhances job satisfaction, reduces burnout, and ensures that talent remains committed and engaged.</span><br /> +<br /> +<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br /> +<br /> +<a class='textlink' href='../'>Back to the main site</a><br /> + </div> + </content> + </entry> + <entry> + <title>Bash Golf Part 3</title> + <link href="gemini://foo.zone/gemfeed/2023-12-10-bash-golf-part-3.gmi" /> + <id>gemini://foo.zone/gemfeed/2023-12-10-bash-golf-part-3.gmi</id> + <updated>2023-12-10T11:35:54+02:00</updated> + <author> + <name>Paul Buetow aka snonux</name> + <email>paul@dev.buetow.org</email> + </author> + <summary>This is the third blog post about my Bash Golf series. This series is random Bash tips, tricks, and weirdnesses I have encountered over time. </summary> + <content type="xhtml"> + <div xmlns="http://www.w3.org/1999/xhtml"> + <h1 style='display: inline'>Bash Golf Part 3</h1><br /> +<br /> +<span class='quote'>Published at 2023-12-10T11:35:54+02:00</span><br /> +<br /> +<pre> + '\ '\ '\ . . |>18>> + \ \ \ . ' . | + O>> O>> O>> . 'o | + \ .\. .. .\. .. . | + /\ . /\ . /\ . . | + / / . / / .'. / / .' . | +jgs^^^^^^^`^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + Art by Joan Stark, mod. by Paul Buetow +</pre> +<br /> +<span>This is the third blog post about my Bash Golf series. 
This series is random Bash tips, tricks, and weirdnesses I have encountered over time. </span><br />
+<br />
+<a class='textlink' href='./2021-11-29-bash-golf-part-1.html'>2021-11-29 Bash Golf Part 1</a><br />
+<a class='textlink' href='./2022-01-01-bash-golf-part-2.html'>2022-01-01 Bash Golf Part 2</a><br />
+<a class='textlink' href='./2023-12-10-bash-golf-part-3.html'>2023-12-10 Bash Golf Part 3 (You are currently reading this)</a><br />
+<br />
+<h2 style='display: inline'><span class='inlinecode'>FUNCNAME</span></h2><br />
+<br />
+<span>If you are looking for a way to dynamically determine the name of the current function (which could be considered the callee in the context of its own execution), you can use the special variable <span class='inlinecode'>FUNCNAME</span>. This is an array variable that contains the names of all shell functions currently in the execution call stack. The element <span class='inlinecode'>FUNCNAME[0]</span> holds the name of the currently executing function, <span class='inlinecode'>FUNCNAME[1]</span> the name of the function that called that, and so on.</span><br />
+<br />
+<span>This is particularly useful for logging when you want to include the callee function in the log output. E.g. 
look at this log helper:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre><i><font color="#9A1900">#!/usr/bin/env bash</font></i> + +<b><font color="#000000">log ()</font></b> { + <b><font color="#0000FF">local</font></b> -r <font color="#009900">level</font><font color="#990000">=</font><font color="#FF0000">"$1"</font><font color="#990000">;</font> <b><font color="#0000FF">shift</font></b> + <b><font color="#0000FF">local</font></b> -r <font color="#009900">message</font><font color="#990000">=</font><font color="#FF0000">"$1"</font><font color="#990000">;</font> <b><font color="#0000FF">shift</font></b> + <b><font color="#0000FF">local</font></b> -i <font color="#009900">pid</font><font color="#990000">=</font><font color="#FF0000">"$$"</font> + + <b><font color="#0000FF">local</font></b> -r <font color="#009900">callee</font><font color="#990000">=</font><font color="#009900">${FUNCNAME[1]}</font> + <b><font color="#0000FF">local</font></b> -r <font color="#009900">stamp</font><font color="#990000">=</font><font color="#009900">$(</font>date <font color="#990000">+%</font>Y<font color="#990000">%</font>m<font color="#990000">%</font>d-<font color="#990000">%</font>H<font color="#990000">%</font>M<font color="#990000">%</font>S<font color="#990000">)</font> + + echo <font color="#FF0000">"$level|$stamp|$pid|$callee|$message"</font> <font color="#990000">>&</font><font color="#993399">2</font> +} + +<b><font color="#000000">at_home_friday_evening ()</font></b> { + log INFO <font color="#FF0000">'One Peperoni Pizza, please'</font> +} + +at_home_friday_evening +</pre> +<br /> +<span>The output is as follows:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>❯ <font color="#990000">.</font>/logexample<font color="#990000">.</font>sh 
+INFO<font color="#990000">|</font><font color="#993399">20231210</font>-<font color="#993399">082732</font><font color="#990000">|</font><font color="#993399">123002</font><font color="#990000">|</font>at_home_friday_evening<font color="#990000">|</font>One Peperoni Pizza<font color="#990000">,</font> please +</pre> +<br /> +<h2 style='display: inline'><span class='inlinecode'>:(){ :|:& };:</span></h2><br /> +<br /> +<span>This one may be widely known already, but I am including it here as I found a cute image illustrating it. But to break <span class='inlinecode'>:(){ :|:& };:</span> down:</span><br /> +<br /> +<ul> +<li><span class='inlinecode'>:(){ }</span> is really a declaration of the function <span class='inlinecode'>:</span></li> +<li>The <span class='inlinecode'>;</span> is ending the current statement</li> +<li>The <span class='inlinecode'>:</span> at the end is calling the function <span class='inlinecode'>:</span></li> +<li><span class='inlinecode'>:|:&</span> is the function body</li> +</ul><br /> +<span>Let's break down the function body <span class='inlinecode'>:|:&</span>: </span><br /> +<br /> +<ul> +<li>The first <span class='inlinecode'>:</span> is calling the function recursively</li> +<li>The <span class='inlinecode'>|:</span> is piping the output to the function <span class='inlinecode'>:</span> again (parallel recursion)</li> +<li>The <span class='inlinecode'>&</span> lets it run in the background.</li> +</ul><br /> +<span>So, it's a fork bomb. If you run it, your computer will run out of resources eventually. 
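Before trying a fork bomb out of curiosity, it is worth containing the blast radius first. A minimal sketch, assuming an unprivileged Linux login shell running bash (the value 64 is an arbitrary, illustrative choice): lower the soft per-user process limit inside a throwaway subshell, so that forks start failing long before the machine is exhausted, while your login shell's limits stay untouched.

```shell
# Contain a fork-bomb experiment: lower the soft process limit in a
# subshell so fork() fails early instead of exhausting the machine.
# 64 is an arbitrary, illustrative value.
bash -c '
  ulimit -S -u 64   # soft limit: at most 64 processes for this shell and its children
  ulimit -S -u      # print the effective limit
'                   # prints: 64
```

Running the bomb inside such a subshell should merely produce a burst of "fork: retry: Resource temporarily unavailable" errors rather than taking the whole system down.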
(Modern Linux distributions could have reasonable limits configured for your login session, so it won't bring down your whole system anymore unless you run it as <span class='inlinecode'>root</span>!)</span><br /> +<br /> +<span>And here is the cute illustration:</span><br /> +<br /> +<a href='./2023-12-10-bash-golf-part-3/bash-fork-bomb.jpg'><img alt='Bash fork bomb' title='Bash fork bomb' src='./2023-12-10-bash-golf-part-3/bash-fork-bomb.jpg' /></a><br /> +<br /> +<h2 style='display: inline'>Inner functions</h2><br /> +<br /> +<span>Bash defines variables as it is interpreting the code. The same applies to function declarations. Let's consider this code:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre><i><font color="#9A1900">#!/usr/bin/env bash</font></i> + +<b><font color="#000000">outer()</font></b> { + <b><font color="#000000">inner()</font></b> { + echo <font color="#FF0000">'Intel inside!'</font> + } + inner +} + +inner +outer +inner +</pre> +<br /> +<span>And let's execute it:</span><br /> +<br /> +<pre> +❯ ./inner.sh +/tmp/inner.sh: line 10: inner: command not found +Intel inside! +Intel inside! +</pre> +<br /> +<span>What happened? The first time <span class='inlinecode'>inner</span> was called, it wasn't defined yet. That only happens after the <span class='inlinecode'>outer</span> run. Note that <span class='inlinecode'>inner</span> will still be globally defined. 
But functions can be declared multiple times (the last version wins):</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre><i><font color="#9A1900">#!/usr/bin/env bash</font></i> + +<b><font color="#000000">outer1()</font></b> { + <b><font color="#000000">inner()</font></b> { + echo <font color="#FF0000">'Intel inside!'</font> + } + inner +} + +<b><font color="#000000">outer2()</font></b> { + <b><font color="#000000">inner()</font></b> { + echo <font color="#FF0000">'Wintel inside!'</font> + } + inner +} + +outer1 +inner +outer2 +inner +</pre> +<br /> +<span>And let's run it:</span><br /> +<br /> +<pre> +❯ ./inner2.sh +Intel inside! +Intel inside! +Wintel inside! +Wintel inside! +</pre> +<br /> +<h2 style='display: inline'>Exporting functions</h2><br /> +<br /> +<span>Have you ever wondered how to execute a shell function in parallel through <span class='inlinecode'>xargs</span>? 
The problem is that this won't work:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre><i><font color="#9A1900">#!/usr/bin/env bash</font></i> + +<b><font color="#000000">some_expensive_operations()</font></b> { + echo <font color="#FF0000">"Doing expensive operations with '$1' from pid $$"</font> +} + +<b><font color="#0000FF">for</font></b> i <b><font color="#0000FF">in</font></b> {<font color="#993399">0</font><font color="#990000">..</font><font color="#993399">9</font>}<font color="#990000">;</font> <b><font color="#0000FF">do</font></b> echo <font color="#009900">$i</font><font color="#990000">;</font> <b><font color="#0000FF">done</font></b> <font color="#990000">\</font> + <font color="#990000">|</font> xargs -P<font color="#993399">10</font> -I{} bash -c <font color="#FF0000">'some_expensive_operations "{}"'</font> +</pre> +<br /> +<span>We try here to run ten parallel processes; each of them should run the <span class='inlinecode'>some_expensive_operations</span> function with a different argument. The arguments are provided to <span class='inlinecode'>xargs</span> through <span class='inlinecode'>STDIN</span> one per line. When executed, we get this:</span><br /> +<br /> +<pre> +❯ ./xargs.sh +bash: line 1: some_expensive_operations: command not found +bash: line 1: some_expensive_operations: command not found +bash: line 1: some_expensive_operations: command not found +bash: line 1: some_expensive_operations: command not found +bash: line 1: some_expensive_operations: command not found +bash: line 1: some_expensive_operations: command not found +bash: line 1: some_expensive_operations: command not found +bash: line 1: some_expensive_operations: command not found +bash: line 1: some_expensive_operations: command not found +bash: line 1: some_expensive_operations: command not found +</pre> +<br /> +<span>There's an easy solution for this. 
Just export the function! It will then be magically available in any sub-shell!</span><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre><i><font color="#9A1900">#!/usr/bin/env bash</font></i>
+
+<b><font color="#000000">some_expensive_operations()</font></b> {
+ echo <font color="#FF0000">"Doing expensive operations with '$1' from pid $$"</font>
+}
+<b><font color="#0000FF">export</font></b> -f some_expensive_operations
+
+<b><font color="#0000FF">for</font></b> i <b><font color="#0000FF">in</font></b> {<font color="#993399">0</font><font color="#990000">..</font><font color="#993399">9</font>}<font color="#990000">;</font> <b><font color="#0000FF">do</font></b> echo <font color="#009900">$i</font><font color="#990000">;</font> <b><font color="#0000FF">done</font></b> <font color="#990000">\</font>
+ <font color="#990000">|</font> xargs -P<font color="#993399">10</font> -I{} bash -c <font color="#FF0000">'some_expensive_operations "{}"'</font>
+</pre>
+<br />
+<span>When we run this now, we get:</span><br />
+<br />
+<pre>
+❯ ./xargs.sh
+Doing expensive operations with '0' from pid 132831
+Doing expensive operations with '1' from pid 132832
+Doing expensive operations with '2' from pid 132833
+Doing expensive operations with '3' from pid 132834
+Doing expensive operations with '4' from pid 132835
+Doing expensive operations with '5' from pid 132836
+Doing expensive operations with '6' from pid 132837
+Doing expensive operations with '7' from pid 132838
+Doing expensive operations with '8' from pid 132839
+Doing expensive operations with '9' from pid 132840
+</pre>
+<br />
+<span>If <span class='inlinecode'>some_expensive_operations</span> calls another function, that function must also be exported. Otherwise, there will be a runtime error again. 
E.g., this won't work:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre><i><font color="#9A1900">#!/usr/bin/env bash</font></i> + +<b><font color="#000000">some_other_function()</font></b> { + echo <font color="#FF0000">"$1"</font> +} + +<b><font color="#000000">some_expensive_operations()</font></b> { + some_other_function <font color="#FF0000">"Doing expensive operations with '$1' from pid $$"</font> +} +<b><font color="#0000FF">export</font></b> -f some_expensive_operations + +<b><font color="#0000FF">for</font></b> i <b><font color="#0000FF">in</font></b> {<font color="#993399">0</font><font color="#990000">..</font><font color="#993399">9</font>}<font color="#990000">;</font> <b><font color="#0000FF">do</font></b> echo <font color="#009900">$i</font><font color="#990000">;</font> <b><font color="#0000FF">done</font></b> <font color="#990000">\</font> + <font color="#990000">|</font> xargs -P<font color="#993399">10</font> -I{} bash -c <font color="#FF0000">'some_expensive_operations "{}"'</font> +</pre> +<br /> +<span>... because <span class='inlinecode'>some_other_function</span> isn't exported! You will also need to add an <span class='inlinecode'>export -f some_other_function</span>!</span><br /> +<br /> +<h2 style='display: inline'>Dynamic variables with <span class='inlinecode'>local</span></h2><br /> +<br /> +<span>You may know that <span class='inlinecode'>local</span> is how to declare local variables in a function. Most don't know that those variables actually have dynamic scope. 
Let's consider the following example:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre><i><font color="#9A1900">#!/usr/bin/env bash</font></i> + +<b><font color="#000000">foo()</font></b> { + <b><font color="#0000FF">local</font></b> <font color="#009900">foo</font><font color="#990000">=</font>bar <i><font color="#9A1900"># Declare local/dynamic variable</font></i> + bar + echo <font color="#FF0000">"$foo"</font> +} + +<b><font color="#000000">bar()</font></b> { + echo <font color="#FF0000">"$foo"</font> + <font color="#009900">foo</font><font color="#990000">=</font>baz +} + +<font color="#009900">foo</font><font color="#990000">=</font>foo <i><font color="#9A1900"># Declare global variable</font></i> +foo <i><font color="#9A1900"># Call function foo</font></i> +echo <font color="#FF0000">"$foo"</font> +</pre> +<br /> +<span>Let's pause a minute. What do you think the output would be?</span><br /> +<br /> +<span>Let's run it:</span><br /> +<br /> +<pre> +❯ ./dynamic.sh +bar +baz +foo +</pre> +<br /> +<span>What happened? The variable <span class='inlinecode'>foo</span> (declared with <span class='inlinecode'>local</span>) is available in the function it was declared in and in all other functions down the call stack! We can even modify the value of <span class='inlinecode'>foo</span>, and the change will be visible up the call stack. 
It's not a global variable; on the last line, <span class='inlinecode'>echo "$foo"</span> echoes the global variable content.</span><br /> +<br /> +<br /> +<h2 style='display: inline'><span class='inlinecode'>if</span> conditionals</h2><br /> +<br /> +<span>Consider all variants here more or less equivalent:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre><i><font color="#9A1900">#!/usr/bin/env bash</font></i> + +<b><font color="#0000FF">declare</font></b> -r <font color="#009900">foo</font><font color="#990000">=</font>foo +<b><font color="#0000FF">declare</font></b> -r <font color="#009900">bar</font><font color="#990000">=</font>bar + +<b><font color="#0000FF">if</font></b> <font color="#990000">[</font> <font color="#FF0000">"$foo"</font> <font color="#990000">=</font> foo <font color="#990000">];</font> <b><font color="#0000FF">then</font></b> + <b><font color="#0000FF">if</font></b> <font color="#990000">[</font> <font color="#FF0000">"$bar"</font> <font color="#990000">=</font> bar <font color="#990000">];</font> <b><font color="#0000FF">then</font></b> + echo ok1 + <b><font color="#0000FF">fi</font></b> +<b><font color="#0000FF">fi</font></b> + +<b><font color="#0000FF">if</font></b> <font color="#990000">[</font> <font color="#FF0000">"$foo"</font> <font color="#990000">=</font> foo <font color="#990000">]</font> <font color="#990000">&&</font> <font color="#990000">[</font> <font color="#FF0000">"$bar"</font> <font color="#990000">==</font> bar <font color="#990000">];</font> <b><font color="#0000FF">then</font></b> + echo ok2a +<b><font color="#0000FF">fi</font></b> + +<font color="#990000">[</font> <font color="#FF0000">"$foo"</font> <font color="#990000">=</font> foo <font color="#990000">]</font> <font color="#990000">&&</font> <font color="#990000">[</font> <font color="#FF0000">"$bar"</font> <font color="#990000">==</font> bar <font 
color="#990000">]</font> <font color="#990000">&&</font> echo ok2b
+
+<b><font color="#0000FF">if</font></b> <font color="#990000">[[</font> <font color="#FF0000">"$foo"</font> <font color="#990000">=</font> foo <font color="#990000">&&</font> <font color="#FF0000">"$bar"</font> <font color="#990000">==</font> bar <font color="#990000">]];</font> <b><font color="#0000FF">then</font></b>
+    echo ok3a
+<b><font color="#0000FF">fi</font></b>
+
+ <font color="#990000">[[</font> <font color="#FF0000">"$foo"</font> <font color="#990000">=</font> foo <font color="#990000">&&</font> <font color="#FF0000">"$bar"</font> <font color="#990000">==</font> bar <font color="#990000">]]</font> <font color="#990000">&&</font> echo ok3b
+
+<b><font color="#0000FF">if</font></b> <b><font color="#0000FF">test</font></b> <font color="#FF0000">"$foo"</font> <font color="#990000">=</font> foo <font color="#990000">&&</font> <b><font color="#0000FF">test</font></b> <font color="#FF0000">"$bar"</font> <font color="#990000">=</font> bar<font color="#990000">;</font> <b><font color="#0000FF">then</font></b>
+    echo ok4a
+<b><font color="#0000FF">fi</font></b>
+
+<b><font color="#0000FF">test</font></b> <font color="#FF0000">"$foo"</font> <font color="#990000">=</font> foo <font color="#990000">&&</font> <b><font color="#0000FF">test</font></b> <font color="#FF0000">"$bar"</font> <font color="#990000">=</font> bar <font color="#990000">&&</font> echo ok4b
+</pre>
+<br />
+<span>The output we get is:</span><br />
+<br />
+<pre>
+❯ ./if.sh
+ok1
+ok2a
+ok2b
+ok3a
+ok3b
+ok4a
+ok4b
+</pre>
+<br />
+<h2 style='display: inline'>Multi-line comments</h2><br />
+<br />
+<span>You all know how to comment. Put a <span class='inlinecode'>#</span> in front of it. You could use multiple single-line comments, or abuse a heredoc and redirect it to the <span class='inlinecode'>:</span> no-op command to emulate a multi-line comment. 
</span><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
+<pre><i><font color="#9A1900">#!/usr/bin/env bash</font></i>
+
+<i><font color="#9A1900"># Single line comment</font></i>
+
+<i><font color="#9A1900"># These are two single line</font></i>
+<i><font color="#9A1900"># comments one after another</font></i>
+
+<font color="#990000">:</font> <font color="#990000"><<</font>COMMENT
+This is another way a
+multi line comment
+could be written<font color="#990000">!</font>
+COMMENT
+</pre>
+<br />
+<span>I will not demonstrate the execution of this script, as it won't print anything! It's obviously not the prettiest way of commenting your code, but it can sometimes be handy!</span><br />
+<br />
+<h2 style='display: inline'>Don't change it while it's being executed</h2><br />
+<br />
+<span>Consider this script:</span><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
+<pre><i><font color="#9A1900">#!/usr/bin/env bash</font></i>
+
+echo foo
+echo echo baz <font color="#990000">>></font> <font color="#009900">$0</font>
+echo bar
+</pre>
+<br />
+<span>When it is run, it will do:</span><br />
+<br />
+<pre>
+❯ ./if.sh
+foo
+bar
+baz
+❯ cat if.sh
+#!/usr/bin/env bash
+
+echo foo
+echo echo baz >> $0
+echo bar
+echo baz
+</pre>
+<br />
+<span>So what happened? The <span class='inlinecode'>echo baz</span> line was appended to the script while it was still executing! And the interpreter also picked it up! This tells us that Bash evaluates each line as it encounters it. This can lead to nasty side effects when editing the script while it is still being executed! 
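One defensive pattern against this, sketched below with a throwaway script (my own illustration, not part of the original post), is to wrap the entire script body in a function and finish with an explicit exit: Bash has to parse the whole function before anything runs, and the trailing exit keeps the interpreter from ever reading lines appended during execution.

```shell
#!/usr/bin/env bash
# Sketch: the self-appending script from above, but with its body wrapped
# in main() and a trailing explicit exit. The appended line still lands in
# the file, yet it is never executed.

tmp=$(mktemp)
cat > "$tmp" <<'EOF'
#!/usr/bin/env bash
main() {
    echo foo
    echo echo baz >> "$0"   # append "echo baz" to this very file
    echo bar
}
main "$@"; exit $?
EOF

out=$(bash "$tmp")                        # prints foo and bar only
appended=$(grep -c '^echo baz$' "$tmp")   # ...although the append happened
echo "$out"
rm -f "$tmp"
```

Without the wrapper, Bash would keep reading the file and execute the appended `echo baz`; with the wrapper, execution stops at the `exit` before the interpreter ever reaches it.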
You should always keep this in mind!</span><br /> +<br /> +<br /> +<span>Other related posts are:</span><br /> +<br /> +<a class='textlink' href='./2021-05-16-personal-bash-coding-style-guide.html'>2021-05-16 Personal Bash coding style guide</a><br /> +<a class='textlink' href='./2021-06-05-gemtexter-one-bash-script-to-rule-it-all.html'>2021-06-05 Gemtexter - One Bash script to rule it all</a><br /> +<a class='textlink' href='./2021-11-29-bash-golf-part-1.html'>2021-11-29 Bash Golf Part 1</a><br /> +<a class='textlink' href='./2022-01-01-bash-golf-part-2.html'>2022-01-01 Bash Golf Part 2</a><br /> +<a class='textlink' href='./2023-12-10-bash-golf-part-3.html'>2023-12-10 Bash Golf Part 3 (You are currently reading this)</a><br /> +<br /> +<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br /> +<br /> +<a class='textlink' href='../'>Back to the main site</a><br /> + </div> + </content> + </entry> + <entry> + <title>Site Reliability Engineering - Part 2: Operational Balance in SRE</title> + <link href="gemini://foo.zone/gemfeed/2023-11-19-site-reliability-engineering-part-2.gmi" /> + <id>gemini://foo.zone/gemfeed/2023-11-19-site-reliability-engineering-part-2.gmi</id> + <updated>2023-11-19T00:18:18+03:00</updated> + <author> + <name>Paul Buetow aka snonux</name> + <email>paul@dev.buetow.org</email> + </author> + <summary>This is the second part of my Site Reliability Engineering (SRE) series. I am currently employed as a Site Reliability Engineer and will try to share what SRE is about in this blog series.</summary> + <content type="xhtml"> + <div xmlns="http://www.w3.org/1999/xhtml"> + <h1 style='display: inline'>Site Reliability Engineering - Part 2: Operational Balance in SRE</h1><br /> +<br /> +<span class='quote'>Published at 2023-11-19T00:18:18+03:00</span><br /> +<br /> +<span>This is the second part of my Site Reliability Engineering (SRE) series. 
I am currently employed as a Site Reliability Engineer and will try to share what SRE is about in this blog series.</span><br />
+<br />
+<a class='textlink' href='./2023-08-18-site-reliability-engineering-part-1.html'>2023-08-18 Site Reliability Engineering - Part 1: SRE and Organizational Culture</a><br />
+<a class='textlink' href='./2023-11-19-site-reliability-engineering-part-2.html'>2023-11-19 Site Reliability Engineering - Part 2: Operational Balance in SRE (You are currently reading this)</a><br />
+<a class='textlink' href='./2024-01-09-site-reliability-engineering-part-3.html'>2024-01-09 Site Reliability Engineering - Part 3: On-Call Culture and the Human Aspect</a><br />
+<br />
+<pre>
+⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣠⣾⣷⣄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
+⠀⠀⠀⠀⣾⠿⠿⠿⠶⠾⠿⠿⣿⣿⣿⣿⣿⣿⠿⠿⠶⠶⠿⠿⠿⣷⠀⠀⠀⠀
+⠀⠀⠀⣸⢿⣆⠀⠀⠀⠀⠀⠀⠀⠙⢿⡿⠉⠀⠀⠀⠀⠀⠀⠀⣸⣿⡆⠀⠀⠀
+⠀⠀⢠⡟⠀⢻⣆⠀⠀⠀⠀⠀⠀⠀⣾⣧⠀⠀⠀⠀⠀⠀⠀⣰⡟⠀⢻⡄⠀⠀
+⠀⢀⣾⠃⠀⠀⢿⡄⠀⠀⠀⠀⠀⢠⣿⣿⡀⠀⠀⠀⠀⠀⢠⡿⠀⠀⠘⣷⡀⠀
+⠀⣼⣏⣀⣀⣀⣈⣿⡀⠀⠀⠀⠀⣸⣿⣿⡇⠀⠀⠀⠀⢀⣿⣃⣀⣀⣀⣸⣧⠀
+⠀⢻⣿⣿⣿⣿⣿⣿⠃⠀⠀⠀⠀⣿⣿⣿⣿⠀⠀⠀⠀⠈⢿⣿⣿⣿⣿⣿⡿⠀
+⠀⠀⠉⠛⠛⠛⠋⠁⠀⠀⠀⠀⢸⣿⣿⣿⣿⡆⠀⠀⠀⠀⠈⠙⠛⠛⠛⠉⠀⠀
+⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠸⣿⣿⣿⣿⠇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
+⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣠⣾⣿⣿⣷⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
+⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣸⣿⣿⣿⣿⣿⣿⣆⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
+⠀⠀⠀⠀⠀⠀⠴⠶⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠶⠦⠀⠀
+</pre>
+<br />
+<h2 style='display: inline'>Operational Balance in SRE: Finding the Equilibrium in Reliability and Velocity</h2><br />
+<br />
+<span>Site Reliability Engineering has established itself as more than just a set of best practices or methodologies. Instead, it stands as a beacon of operational excellence, which guides engineering teams through the turbulent waters of modern software development and system management.</span><br />
+<br />
+<span>In the universe of software production, two fundamental forces are often at odds: the drive for rapid feature release (velocity) and the need for system reliability. Traditionally, the faster teams moved, the more risk was introduced into systems. SRE offers an approach to mitigate these conflicting drives through concepts like error budgets and SLIs/SLOs. 
These mechanisms offer a tangible metric, allowing teams to quantify how many changes they can push while ensuring they don't compromise system health. Thus, the error budget becomes a balancing act, where teams weigh the trade-offs between innovation and reliability.</span><br />
+<br />
+<span>An important part of this balance is the dichotomy between operations and coding. According to SRE principles, an engineer should ideally spend an equal amount of time on operations work and coding - 50% on each. This isn't just a random metric; it's a reflection of the value SRE places on both maintaining operational excellence and progressing forward with innovations. This balance ensures that while SREs are solving today's problems, they are also preparing for tomorrow's challenges. </span><br />
+<br />
+<span>However, not all operational tasks are equal. SRE differentiates between "ops work" and "toil". While ops work is integral to system maintenance and can provide value, toil represents repetitive, mundane tasks which offer little value in the long run. Recognising and minimising toil is crucial. A culture that allows engineers to drown in toil stifles innovation and growth. Hence, an organisation's approach to toil indicates its operational health and commitment to balance.</span><br />
+<br />
+<span>A cornerstone of achieving operational balance lies in the tools and processes SREs use. Effective monitoring, good observability tooling, and the ability to handle high-cardinality data are foundational. These aren't just technical requisites but reflective of an organisational culture prioritising proactive problem-solving. By having systems that effectively flag potential issues before they escalate, SREs can maintain the balance between system stability and forward momentum.</span><br />
+<br />
+<span>Moreover, operational balance isn't just a technological or process challenge; it's a human one. 
The health of on-call engineers is as crucial as the health of the services they manage. On-call postmortems, continuous feedback loops, and recognising gaps (be it tooling, operational expertise, or resources) ensure that the human elements of operations are not overlooked. </span><br />
+<br />
+<span>In conclusion, operational balance in SRE isn't a static thing but an ongoing journey. It requires organisations to constantly evaluate their practices, tools, and, most importantly, their culture. By achieving this balance, organisations can ensure that they have time for innovation while maintaining the robustness and reliability of their systems, resulting in sustainable long-term success.</span><br />
+<br />
+<span>That all sounds very romantic. The truth is, it's brutal to achieve the perfect balance. No system will ever be perfect. But at least we should aim for it!</span><br />
+<br />
+<span>Continue with the third part of this series:</span><br />
+<br />
+<a class='textlink' href='./2024-01-09-site-reliability-engineering-part-3.html'>2024-01-09 Site Reliability Engineering - Part 3: On-Call Culture and the Human Aspect</a><br />
+<br />
+<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
+<br />
+<a class='textlink' href='../'>Back to the main site</a><br />
+    </div>
+  </content>
+ </entry>
diff --git a/gemfeed/index.gmi b/gemfeed/index.gmi
index 7be03b42..75affaf6 100644
--- a/gemfeed/index.gmi
+++ b/gemfeed/index.gmi
@@ -2,6 +2,7 @@
 ## To be in the .zone!
 
+=> ./2024-01-13-one-reason-why-i-love-openbsd.gmi 2024-01-13 - One reason why I love OpenBSD
 => ./2024-01-09-site-reliability-engineering-part-3.gmi 2024-01-09 - Site Reliability Engineering - Part 3: On-Call Culture and the Human Aspect
 => ./2023-12-10-bash-golf-part-3.gmi 2023-12-10 - Bash Golf Part 3
 => ./2023-11-19-site-reliability-engineering-part-2.gmi 2023-11-19 - Site Reliability Engineering - Part 2: Operational Balance in SRE |
