Diffstat (limited to 'gemfeed/atom.xml')
 gemfeed/atom.xml | 711 ++++++++++++++++++++++++++++++++++++---------------
 1 file changed, 477 insertions(+), 234 deletions(-)
diff --git a/gemfeed/atom.xml b/gemfeed/atom.xml
index 007f2136..9747644f 100644
--- a/gemfeed/atom.xml
+++ b/gemfeed/atom.xml
@@ -1,12 +1,252 @@
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
- <updated>2022-05-28T18:39:47+01:00</updated>
+ <updated>2022-06-15T08:48:29+01:00</updated>
<title>foo.zone feed</title>
<subtitle>To be in the .zone!</subtitle>
<link href="https://foo.zone/gemfeed/atom.xml" rel="self" />
<link href="https://foo.zone/" />
<id>https://foo.zone/</id>
<entry>
+ <title>Sweating the small stuff - Tiny projects of mine</title>
+ <link href="https://foo.zone/gemfeed/2022-06-15-sweating-the-small-stuff.html" />
+ <id>https://foo.zone/gemfeed/2022-06-15-sweating-the-small-stuff.html</id>
+ <updated>2022-06-15T08:47:44+01:00</updated>
+ <author>
+ <name>Paul Buetow</name>
+ <email>comments@mx.buetow.org</email>
+ </author>
+        <summary>This blog post is a bit different from the others. It consists of multiple smaller projects worth mentioning. I got inspired by Julia Evans' 'Tiny programs' blog post and the side projects of The Sephist, so I thought I would also write a blog post listing a couple of small projects of mine. ... to read on please visit my site.</summary>
+ <content type="xhtml">
+ <div xmlns="http://www.w3.org/1999/xhtml">
+ <h1>Sweating the small stuff - Tiny projects of mine</h1>
+<pre>
+ _
+ /_/_ .'''.
+ =O(_)))) ...' `.
+ jgs \_\ `. .'''
+ `..'
+</pre><br />
+<p class="quote"><i>Published by Paul at 2022-06-15</i></p>
+<p>This blog post is a bit different from the others. It consists of multiple smaller projects worth mentioning. I got inspired by Julia Evans' "Tiny programs" blog post and the side projects of The Sephist, so I thought I would also write a blog post listing a couple of small projects of mine:</p>
+<a class="textlink" href="https://jvns.ca/blog/2022/03/08/tiny-programs/">Tiny programs</a><br />
+<a class="textlink" href="https://thesephist.com/projects/">The Sephist's project list</a><br />
+<p>Working on tiny projects is a lot of fun, as you don't need to worry about standards or code reviews, and you decide how and when you work on them. There are no restrictions regarding the technologies used. You are likely the only person working on these tiny projects, which means there is no conflict with other developers. This is complete freedom :-).</p>
+<p>But before going through the tiny projects let's take a paragraph for the <span class="inlinecode">1y</span> anniversary retrospective.</p>
+<h2><span class="inlinecode">1y</span> anniversary</h2>
+<p>It has been one year since I started posting regularly (at least once monthly) on this blog again. It has been a lot of fun (and work) doing so for various reasons:</p>
+<ul>
+<li>I practice English writing (I am not a native speaker). I am far from being a novelist, but this blog helps improve my writing skills. I also tried out tools like Grammarly.com and Languagetool.org, and worked with <span class="inlinecode">:spell</span> in Vim and the LibreOffice checker. This post was checked with the <span class="inlinecode">write-better</span> Node application.</li>
+<li>I force myself to "finish" some kind of project worth writing about every month. If it's not a project, then it's still a topic which requires research and deep thinking. Producing 2k words of text can actually be challenging.</li>
+<li>It's fun to rely on KISS (keep it simple &amp; stupid) tools. E.g. using Gemtexter instead of WordPress, and Vim instead of an office suite or a rich web editor.</li>
+</ul>
+<p>Retrospectively, these have been the most popular blog posts of mine over the last year:</p>
+<a class="textlink" href="https://foo.zone/gemfeed/2021-09-12-keep-it-simple-and-stupid.html">Keep it simple and stupid</a><br />
+<a class="textlink" href="https://foo.zone/gemfeed/2022-04-10-creative-universe.html">Creative universe</a><br />
+<a class="textlink" href="https://foo.zone/gemfeed/2021-11-29-bash-golf-part-1.html">Bash Golf series</a><br />
+<a class="textlink" href="https://foo.zone/gemfeed/2021-12-26-how-to-stay-sane-as-a-devops-person.html">How to stay sane as a DevOps person</a><br />
+<a class="textlink" href="https://foo.zone/gemfeed/2022-05-27-perl-is-still-a-great-choice.html">Perl is still a great choice</a><br />
+<p>But now, let's continue with the small projects worth mentioning :-)</p>
+<h2>Static photo album generator</h2>
+<p><span class="inlinecode">photoalbum.sh</span> is a minimal static HTML photo album generator. I use it to drive "The Irregular Ninja" site and for some ad-hoc (personal) albums to share photos with the family and friends.</p>
+<a class="textlink" href="https://codeberg.org/snonux/photoalbum">https://codeberg.org/snonux/photoalbum</a><br />
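The overall shape of such a generator fits in a few lines of Bash. The following is a simplified sketch with assumed paths and options, not the actual photoalbum.sh (which additionally does the random CSS effects and background blur); it needs ImageMagick's convert for the thumbnails:

```shell
#!/usr/bin/env bash
# Simplified sketch of a static photo album generator (assumed layout,
# not the real photoalbum.sh). Requires ImageMagick for thumbnails.
set -euo pipefail

indir=${1:-./photos} outdir=${2:-./album}
mkdir -p "$indir" "$outdir/thumbs"

{
    echo '<html><body>'
    # Shuffle so every regeneration presents the photos in random order.
    find "$indir" -name '*.jpg' | shuf | while read -r photo; do
        name=$(basename "$photo")
        # Create a thumbnail if ImageMagick is available.
        command -v convert >/dev/null &&
            convert "$photo" -resize 400x400 "$outdir/thumbs/$name"
        echo "<a href=\"$name\"><img src=\"thumbs/$name\" /></a>"
    done
    echo '</body></html>'
} > "$outdir/index.html"
```

Pointing it at a directory of JPEGs yields an `album/index.html` plus a `thumbs/` folder ready to upload to any static web server.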
+<h3>The Irregular Ninja</h3>
+<p>Photography is one of my casual hobbies. I love to capture interesting perspectives and motifs, and to walk streets and neighbourhoods I have never walked before so I can capture those unexpected motifs, colours and moments. Unfortunately, because of time constraints (and sometimes weather constraints), I only do that on a pretty infrequent basis.</p>
+<a href="https://foo.zone/gemfeed/2022-06-15-sweating-the-small-stuff/ninja.jpg"><img src="https://foo.zone/gemfeed/2022-06-15-sweating-the-small-stuff/ninja.jpg" /></a><br />
+<p>More than 10 years ago I wrote the bespoke small static photo album generator <span class="inlinecode">photoalbum.sh</span> in Bash, which I recently refactored to a modern Bash coding style; I also freshened up the Cascading Style Sheets. Last but not least, the new domain name <span class="inlinecode">irregular.ninja</span> has been registered.</p>
+<p>The thumbnails are presented in a random order and there are also random CSS effects for each preview. There's also a simple background blur for each page generated. And that's all in less than 300 lines of Bash code! The script requires ImageMagick (available for all common Linux and *BSD distributions) to be installed.</p>
+<p>As you can see, there is a lot of randomization and irregularity going on. Thus, the name "Irregular Ninja" was born.</p>
+<a class="textlink" href="https://irregular.ninja">https://irregular.ninja</a><br />
+<p>I only use a digital compact camera or a smartphone to take the photos. I don't like the idea of carrying around a big camera with me "just in case" so I keep it small and simple. The best camera is the camera you have with you. :-)</p>
+<p>I hope you like this photo site. It's worth checking it out again around once every other month!</p>
+<h2>Random journal page extractor</h2>
+<p>I bullet journal. I write my notes into a Leuchtturm paper notebook. Once full, I scan it to a PDF file and archive it. As of writing this, I am at journal #7 (each between 123 and 251 A5 pages). This means there is a lot of material already.</p>
+<p>Once in a while I want to revisit older notes and ideas. For that I have written a simple Bash script <span class="inlinecode">randomjournalpage.sh</span> which randomly picks a PDF file from a folder and extracts 42 pages from it at a random page offset and opens them in a PDF viewer (Evince in this case, as I am a GNOME user). </p>
+<a class="textlink" href="https://codeberg.org/snonux/randomjournalpage">https://codeberg.org/snonux/randomjournalpage</a><br />
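The core of such a script boils down to two pieces: choosing a random start page and cutting the window out of the PDF. A sketch of that logic, assuming qpdf as the PDF tool (the real randomjournalpage.sh may use something else):

```shell
#!/usr/bin/env bash
# Sketch of the random journal window logic (qpdf, Evince and the paths
# are assumptions; the real script may differ).
set -euo pipefail

pages=42

# Pick a random 1-based start page so that $pages pages still fit.
random_offset () {
    local total=$1 max_start
    max_start=$(( total - pages + 1 ))
    (( max_start < 1 )) && max_start=1
    echo $(( RANDOM % max_start + 1 ))
}

extract_random_pages () {
    local pdf=$1 total start
    total=$(qpdf --show-npages "$pdf")
    start=$(random_offset "$total")
    qpdf "$pdf" --pages . "$start-$(( start + pages - 1 ))" -- extract.pdf
    evince extract.pdf &    # open the extract in the PDF viewer
}

# Usage: pick one archived journal at random, then extract from it:
#   extract_random_pages "$(find ~/journals -name '*.pdf' | shuf -n 1)"
```

For short journals the offset is clamped to page 1, so the extract simply starts at the beginning.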
+<p>There's also a weekly <span class="inlinecode">CRON</span> job on my servers to send me a reminder that I might want to read my old journals again. My laptop also runs this script each time it boots and saves the output to a NextCloud folder. From there, it's synchronized to the NextCloud server so I can pick it up with my smartphone later when I am "on the road".</p>
+<h2>Global uptime records statistic generator</h2>
+<p><span class="inlinecode">guprecords</span> is a Perl script which reads multiple <span class="inlinecode">uprecord</span> files (produced by <span class="inlinecode">uptimed</span>, a widely available daemon for recording server uptimes) and generates combined uptime statistics for multiple hosts. I keep the record files of all my personal computers in a Git repository (I even keep the records of boxes I don't own or use anymore), and there's already quite a collection of them. It looks like this:</p>
+<pre>
+❯ perl ~/git/guprecords/src/guprecords --indir=./stats/ --count=20 --all
+Pos | System | Kernel | Uptime | Boot time
+ 1 | sun | FreeBSD 10.1-RELEA.. | 502d 03:29:19 | Sun Aug 16 15:56:40 2015
+ 2 | vulcan | Linux 3.10.0-1160... | 313d 13:19:39 | Sun Jul 25 18:32:25 2021
+ 3 | uugrn | FreeBSD 10.2-RELEASE | 303d 15:19:35 | Tue Dec 22 21:33:07 2015
+ 4 | uugrn | FreeBSD 11.0-RELEA.. | 281d 14:38:04 | Fri Oct 21 15:22:02 2016
+ 5 | deltavega | Linux 3.10.0-957.2.. | 279d 11:15:00 | Sun Jun 30 11:42:38 2019
+ 6 | vulcan | Linux 3.10.0-957.2.. | 279d 11:12:14 | Sun Jun 30 11:43:41 2019
+ 7 | deltavega | Linux 3.10.0-1160... | 253d 04:42:22 | Sat Apr 24 13:34:34 2021
+ 8 | host0 | FreeBSD 6.2-RELEAS.. | 240d 02:23:23 | Wed Jan 31 20:34:46 2007
+ 9 | uugrn | FreeBSD 11.1-RELEA.. | 202d 21:12:41 | Sun May 6 18:06:17 2018
+ 10 | tauceti | Linux 3.2.0-4-amd64 | 197d 18:45:40 | Mon Dec 16 19:47:54 2013
+ 11 | pluto | Linux 2.6.32-5-amd64 | 185d 11:53:04 | Wed Aug 1 07:34:10 2012
+ 12 | sun | FreeBSD 10.3-RELEA.. | 164d 22:31:55 | Sat Jul 22 18:47:21 2017
+ 13 | vulcan | Linux 3.10.0-1160... | 161d 07:08:43 | Sun Feb 14 10:05:38 2021
+ 14 | sun | FreeBSD 10.3-RELEA.. | 158d 21:18:36 | Sat Jan 27 10:18:57 2018
+ 15 | uugrn | FreeBSD 11.1-RELEA.. | 157d 20:57:24 | Fri Nov 3 05:02:54 2017
+ 16 | tauceti-f | Linux 3.2.0-3-amd64 | 150d 04:12:38 | Mon Sep 16 09:02:58 2013
+ 17 | tauceti | Linux 3.2.0-4-amd64 | 149d 09:21:43 | Mon Aug 11 09:47:50 2014
+ 18 | pluto | Linux 3.2.0-4-amd64 | 142d 02:57:31 | Mon Sep 8 01:59:02 2014
+ 19 | tauceti-f | Linux 3.2.0-3-amd64 | 132d 22:46:26 | Mon May 6 11:11:35 2013
+ 20 | keppler-16b | Darwin 13.4.0 | 131d 08:17:12 | Thu Jun 11 10:44:25 2015
+</pre><br />
+<p>It can also sum up all uptimes for each host to generate a total per host uptime top list:</p>
+<pre>
+❯ perl ~/git/guprecords/src/guprecords --indir=./stats/ --count=20 --total
+Pos | System | Kernel | Uptime |
+ 1 | uranus | Linux 5.4.17-200.f.. | 1419d 19:05:39 |
+ 2 | sun | FreeBSD 10.1-RELEA.. | 1363d 11:41:14 |
+ 3 | vulcan | Linux 3.10.0-1160... | 1262d 20:27:48 |
+ 4 | uugrn | FreeBSD 10.2-RELEASE | 1219d 15:10:16 |
+ 5 | deltavega | Linux 3.10.0-957.2.. | 1115d 06:33:55 |
+ 6 | pluto | Linux 2.6.32-5-amd64 | 1086d 10:44:05 |
+ 7 | tauceti | Linux 3.2.0-4-amd64 | 846d 12:58:21 |
+ 8 | tauceti-f | Linux 3.2.0-3-amd64 | 625d 07:16:39 |
+ 9 | host0 | FreeBSD 6.2-RELEAS.. | 534d 19:50:13 |
+ 10 | keppler-16b | Darwin 13.4.0 | 448d 06:15:00 |
+ 11 | tauceti-e | Linux 3.2.0-4-amd64 | 415d 18:14:13 |
+ 12 | moon | Darwin 18.7.0 | 326d 11:21:42 |
+ 13 | callisto | Linux 4.0.4-303.fc.. | 303d 12:18:24 |
+ 14 | alphacentauri | FreeBSD 10.1-RELEA.. | 300d 20:15:00 |
+ 15 | earth | Linux 5.13.14-200... | 289d 08:05:05 |
+ 16 | makemake | Linux 5.11.9-200.f.. | 286d 21:53:03 |
+ 17 | london | Linux 3.2.0-4-amd64 | 258d 15:10:38 |
+ 18 | fishbone | OpenBSD 4.1 .. | 223d 05:55:26 |
+ 19 | sagittarius | Darwin 15.6.0 | 198d 23:53:59 |
+ 20 | mars | Linux 3.2.0-4-amd64 | 190d 05:44:21 |
+</pre><br />
+<a class="textlink" href="https://codeberg.org/snonux/guprecords">https://codeberg.org/snonux/guprecords</a><br />
+<p>All this is of no real practical use, but it's fun!</p>
+<h2>Server configuration management</h2>
+<p>The <span class="inlinecode">rexfiles</span> project contains all Rex files for my (personal) server setup automation. A <span class="inlinecode">Rexfile</span> is written in a Perl DSL run by the Rex configuration management system. It's pretty much KISS and that's why I love it. It suits my personal needs perfectly. </p>
+<a class="textlink" href="https://codeberg.org/snonux/rexfiles">https://codeberg.org/snonux/rexfiles</a><br />
+<a class="textlink" href="https://www.rexify.org">https://www.rexify.org</a><br />
+<p>This is an E-Mail I posted to the Rex mailing list:</p>
+<p class="quote"><i>Hi there! I was searching for a simple way to automate my personal OpenBSD setup. I found that configuration management systems like Puppet, Salt, Chef, etc. were too bloated for my personal needs. So for a while I configured everything by hand. At one point I got fed up and started writing shell scripts. But that was not the holy grail either, so I looked at Ansible. I found that Ansible has some dependencies on Python on the target machine when you want to use all the features. Furthermore, I am not really familiar with Python. But then I remembered that there was also Rex. It's written in my beloved Perl. Also, OpenBSD comes with Perl in the base system out of the box, which makes it integrate better: all my scripts (the automation and also the scripts deployed via the automation to the system) are in the same language. Rex may not have all the features of other configuration management systems, but it's easy to work around or extend when you know Perl. Thanks!</i></p>
+<h2>Fancy SSH execution loop</h2>
+<p><span class="inlinecode">rubyfy</span> is a fancy SSH loop wrapper written in Ruby for running shell commands on multiple remote servers at once. I also forked this project for work (under a different name) where I added even more features such as automatic server discovery. It's used by many colleagues on a frequent basis. Here are some examples:</p>
+<pre>
+# Run command 'hostname' on server foo.example.com
+./rubyfy.rb -c 'hostname' &lt;&lt;&lt; foo.example.com
+
+# Run command 'id' as root (via sudo) on all servers listed in the list file
+# Do it on 10 servers in parallel
+./rubyfy.rb --parallel 10 --root --command 'id' &lt; serverlist.txt
+
+# Run a fancy script in background on 50 servers in parallel
+./rubyfy.rb -p 50 -r -b -c '/usr/local/scripts/fancy.zsh' &lt; serverlist.txt
+
+# Grep for specific process on both servers and write output to ./out/grep.txt
+echo {foo,bar}.example.com | ./rubyfy.rb -p 10 -c 'pgrep -lf httpd' -n grep.txt
+
+# Reboot server only if file /var/run/maintenance.lock does NOT exist!
+echo foo.example.com |
+./rubyfy.rb --root --command reboot --precondition /var/run/maintenance.lock
+</pre><br />
+<a class="textlink" href="https://codeberg.org/snonux/rubyfy">https://codeberg.org/snonux/rubyfy</a><br />
+<h2>A KISS dynamic DNS solution</h2>
+<p><span class="inlinecode">dyndns</span> is a tiny shell script which implements "your" own DynDNS service. It relies on SSH access to the authoritative DNS server and the <span class="inlinecode">nsupdate</span> command. There is really no need to use any of the "other" free DynDNS services out there.</p>
+<p>Syntax (this must run from the client connecting to the DNS server through SSH): </p>
+<pre>
+ssh dyndns@dyndnsserver /path/to/dyndns-update \
+ your.host.name. TYPE new-entry TIMEOUT
+</pre><br />
+<p>This is a real world example: </p>
+<pre>
+ssh dyndns@dyndnsserver /path/to/dyndns-update \
+ local.buetow.org. A 137.226.50.91 30
+</pre><br />
+<a class="textlink" href="https://codeberg.org/snonux/dyndns">https://codeberg.org/snonux/dyndns</a><br />
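On the server side, such a helper essentially just has to turn its SSH-supplied arguments into an nsupdate batch. A hypothetical sketch (the local server address and the key path are assumptions; the real dyndns-update script may differ):

```shell
#!/usr/bin/env bash
# Hypothetical dyndns-update sketch: turn the SSH-supplied arguments
# into an nsupdate batch for the local authoritative server.
set -euo pipefail

host=${1:-local.buetow.org.} type=${2:-A} entry=${3:-137.226.50.91} ttl=${4:-30}

build_update () {
    cat <<EOF
server 127.0.0.1
update delete $host $type
update add $host $ttl $type $entry
send
EOF
}

build_update    # in real use: build_update | nsupdate -k /path/to/update.key
```

The delete-then-add pair replaces any stale record before installing the new one.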
+<h2>CPU information gatherer for Linux</h2>
+<p>This is a tiny GNU Awk script for Linux which displays information about the CPU. All it does is present <span class="inlinecode">/proc/cpuinfo</span> in an easier-to-read way. The output is somewhat more compact than that of the standard <span class="inlinecode">lscpu</span> command commonly found on Linux distributions.</p>
+<pre>
+❯ ./cpuinfo
+cpuinfo (c) 1.0.2 Paul Buetow
+
+ 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz GenuineIntel 12288 KB cache
+
+p = 001 Physical processors
+c = 004 Cores
+s = 008 Siblings (Hyper-Threading enabled if s != c)
+v = 008 [v = p*c*(s != c ? 2 : 1)] Total logical CPUs
+ Hyper-Threading is enabled
+
+0003000 MHz each core
+0012000 MHz total
+0005990 Bogomips each processor (including virtual)
+0023961 Bogomips total
+</pre><br />
+<a class="textlink" href="https://codeberg.org/snonux/cpuinfo">https://codeberg.org/snonux/cpuinfo</a><br />
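The counting itself is straightforward: distinct physical ids give the packages, distinct core ids the cores, and the processor entries the logical CPUs. A portable Awk sketch of that idea, fed from a canned sample here so it runs anywhere (the real cpuinfo script reads /proc/cpuinfo directly and prints more):

```shell
#!/usr/bin/env bash
# Sketch of the /proc/cpuinfo counting idea (not the real cpuinfo script).
cpuinfo_summary () {
    awk -F': *' '
        /^processor/   { v++ }                               # logical CPUs
        /^physical id/ { if (!($2 in phys))  { phys[$2];  p++ } }
        /^core id/     { if (!($2 in cores)) { cores[$2]; c++ } }
        END { printf "p=%d c=%d v=%d HT=%s\n", p, c, v,
                     ((v > p * c) ? "yes" : "no") }'
}

# On a real Linux box: cpuinfo_summary < /proc/cpuinfo
cpuinfo_summary <<'EOF'
processor : 0
physical id : 0
core id : 0
processor : 1
physical id : 0
core id : 0
processor : 2
physical id : 0
core id : 1
processor : 3
physical id : 0
core id : 1
EOF
```

With the sample above this prints `p=1 c=2 v=4 HT=yes`: four logical CPUs on one package with two cores, so Hyper-Threading is on.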
+<h2>Show differences of two files over the network</h2>
+<p>This is a shell wrapper around the standard diff tool to compare a file between two computers over the network. It uses NetCat for the network part and encrypts all traffic with OpenSSL. This is how it's used:</p>
+<ol>
+<li>Open two terminal windows and log in to two different hosts (you could use ClusterSSH or <span class="inlinecode">tmux</span> here).</li>
+<li>Run <span class="inlinecode">netdiff otherhost.example.org /file/to/diff.txt</span> on the first host and <span class="inlinecode">netdiff firsthost.example.org /file/to/diff.txt</span> on the second host.</li>
+<li>You will then see the file differences.</li>
+</ol>
+<a class="textlink" href="https://codeberg.org/snonux/netdiff">https://codeberg.org/snonux/netdiff</a><br />
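The underlying plumbing can be pictured roughly like this (port, cipher and nc flags are assumptions, and netcat flavors differ; the real netdiff script may be wired differently):

```shell
#!/usr/bin/env bash
# Rough sketch of the netdiff idea: one side serves an encrypted copy of
# the file, the other side decrypts it and diffs it against its own copy.
# Port, cipher and nc flags are assumptions (nc flavors differ).
set -euo pipefail

secret=changeme port=1234

serve () {   # run on the first host:  serve /file/to/diff.txt
    openssl enc -aes-256-cbc -pbkdf2 -pass "pass:$secret" < "$1" |
        nc -l -p "$port"
}

compare () { # run on the second host: compare firsthost /file/to/diff.txt
    nc "$1" "$port" |
        openssl enc -d -aes-256-cbc -pbkdf2 -pass "pass:$secret" |
        diff "$2" -
}
```

diff reads the decrypted remote copy from stdin (`-`), so nothing unencrypted ever crosses the wire.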
+<h2>Delay sending out E-Mails with Mutt</h2>
+<p>This is a shell script for the Mutt email client to delay sending out E-Mails. For example, you may want to write an email on Saturday but not bother the recipient before Monday. It relies on CRON.</p>
+<a class="textlink" href="https://codeberg.org/snonux/muttdelay">https://codeberg.org/snonux/muttdelay</a><br />
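One way such a delay queue can work (paths and the naming scheme are assumptions, not necessarily what muttdelay does): Mutt hands the message to an enqueue helper that files it under its release time, and the cron job flushes everything that is due:

```shell
#!/usr/bin/env bash
# Sketch of a mail delay queue (paths and naming are assumptions).
set -euo pipefail

queue=${MUTTDELAY_QUEUE:-$HOME/.muttdelay}
mkdir -p "$queue"

enqueue () {  # usage: enqueue <release-epoch> < message
    cat > "$queue/$1.$$.eml"
}

flush () {    # run from cron, e.g. */15 * * * *
    local now msg due
    now=$(date +%s)
    for msg in "$queue"/*.eml; do
        [ -e "$msg" ] || continue
        due=${msg##*/}; due=${due%%.*}    # release epoch from the file name
        if [ "$due" -le "$now" ]; then
            sendmail -t < "$msg" && rm -f "$msg"
        fi
    done
}
```

Encoding the release time in the file name keeps the queue stateless: the cron job needs no database, only a directory listing.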
+<h2>Graphical UI for sending text messages</h2>
+<p><span class="inlinecode">jsmstrade</span> is a minimalistic graphical Java Swing client for sending SMS messages over the SMStrade service.</p>
+<a href="https://foo.zone/gemfeed/2022-06-15-sweating-the-small-stuff/jsmstrade.png"><img src="https://foo.zone/gemfeed/2022-06-15-sweating-the-small-stuff/jsmstrade.png" /></a><br />
+<a class="textlink" href="https://codeberg.org/snonux/jsmstrade">https://codeberg.org/snonux/jsmstrade</a><br />
+<a class="textlink" href="https://smstrade.de">https://smstrade.de</a><br />
+<h2>IPv6 and IPv4 connectivity testing site</h2>
+<p><span class="inlinecode">ipv6test</span> is a quick and dirty Perl CGI script for testing whether your browser connects via IPv4 or IPv6. It requires you to set up three sub-domains: one reachable only via IPv4 (e.g. <span class="inlinecode">test4.ipv6.buetow.org</span>), another reachable only via IPv6 (e.g. <span class="inlinecode">test6.ipv6.buetow.org</span>), and the main one reachable through both protocols (e.g. <span class="inlinecode">ipv6.buetow.org</span>).</p>
+<p>I don't have it running on any of my servers at the moment. This means that there is no demo to show now. Sorry!</p>
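The CGI part of such a test is tiny: it only has to report which address family the request came in on. A minimal sketch of the principle in shell (the real ipv6test is a Perl CGI, and its exact output is not reproduced here):

```shell
#!/bin/sh
# Minimal CGI sketch of the connectivity test idea (the real ipv6test
# is a Perl CGI; this just illustrates the principle).

report_proto () {
    # An IPv6 peer address contains colons, an IPv4 one does not.
    case ${REMOTE_ADDR:-} in
        *:*) echo "You are connecting via IPv6 ($REMOTE_ADDR)" ;;
        '')  echo "No REMOTE_ADDR set" ;;
        *)   echo "You are connecting via IPv4 ($REMOTE_ADDR)" ;;
    esac
}

printf 'Content-Type: text/plain\r\n\r\n'
report_proto
```

The IPv4-only and IPv6-only sub-domains then let the page probe each protocol separately from the browser.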
+<h2>List open Jira tickets in the terminal</h2>
+<p><span class="inlinecode">japi</span> is a small Perl script for listing open Jira issues. It might be broken by now, as the Jira APIs may have changed. Sorry! But feel free to fork and modernize it. :-)</p>
+<h2>Debian running on "your" Android phone</h2>
+<p>Debroid is a tutorial and a set of scripts to install and to run a Debian <span class="inlinecode">chroot</span> on an Android phone.</p>
+<a class="textlink" href="https://foo.zone/gemfeed/2015-12-05-run-debian-on-your-phone-with-debroid.html">Check out my previous post about it</a><br />
+<p>I am not using Debroid anymore, as I have since switched to Termux.</p>
+<a class="textlink" href="https://termux.com">https://termux.com</a><br />
+<h2>Perl service framework</h2>
+<p>PerlDaemon is a minimal daemon for Linux and other Unix-like operating systems, programmed in Perl. It is a minimal but pretty functional and fairly generic service framework. This means that it does not do anything useful other than providing a framework for starting, stopping, configuring and logging. To do something useful, a module (written in Perl) must be provided.</p>
+<a class="textlink" href="https://foo.zone/gemfeed/2011-05-07-perl-daemon-service-framework.html">Check out my previous post about it</a><br />
+<h2>More</h2>
+<p>There are more projects on my Codeberg page, but they aren't as tiny as the ones mentioned in this post or aren't finished yet, so I won't list them here. However, there are also a few more scripts I use frequently (not publicly accessible (yet?)) which I would like to mention:</p>
+<h3>Work time tracker</h3>
+<p><span class="inlinecode">worktime.rb</span>, for example, is a command line Ruby script I use to track the time I spend working. This is to make sure that I don't overwork (particularly useful when working from home). It also generates daily and weekly stats and carries over work time (surpluses or deficits) to the next work day, week or even year.</p>
+<p>It has some special features such as tracking time for self-improvement/development, days off and time spent at the lunch break and time spent on Pet Projects.</p>
+<p>An example weekly report looks like this (I often don't track my lunch time; instead, I stop the work timer when I go out for lunch and start it again once back at the desk):</p>
+<pre>
+ Mon 20211213 50: work:5.92h
+ Tue 20211214 50: work:7.47h lunch:0.50h pet:0.42h
+ Wed 20211215 50: work:8.86h pet:0.50h
+ Thu 20211216 50: work:8.02h pet:0.50h
+ Fri 20211217 50: work:9.81h
+ * Sat 20211218 50: work:0.00h selfdevelopment:1.00h
+ * Sun 20211219 50: work:2.08h pet:1.00h selfdevelopment:-2.08h
+================================================
+ balance:0.06h work:42.15h lunch:0.50h pet:2.42h selfdevelopment:-1.08h buffer:8.38h
+</pre><br />
+<p>All I do when I start work is to run the <span class="inlinecode">wtlogin</span> command and after finishing work to run the <span class="inlinecode">wtlogout</span> command. My shell will remind me when I work without having logged in. It uses a simple JSON database which is editable with <span class="inlinecode">wtedit</span> (this opens the JSON in Vim). The report shown above can be generated with <span class="inlinecode">wtreport</span>. Any out-of-bounds reporting can be added with the <span class="inlinecode">wtadd</span> command.</p>
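The balance arithmetic behind such a report is simple. A sketch of the weekly calculation (the 40h target is an assumption for illustration; worktime.rb tracks more categories than plain work hours):

```shell
#!/usr/bin/env bash
# Sketch of the weekly balance arithmetic (40h target is an assumption).
weekly_balance () {
    # usage: weekly_balance <hours-worked-per-day>...
    printf '%s\n' "$@" |
        awk -v target=40 '{ sum += $1 } END { printf "%.2f\n", sum - target }'
}

weekly_balance 5.92 7.47 8.86 8.02 9.81   # Mon-Fri hours from the report above
```

For the Mon-Fri rows above this yields a carry-over of 0.08h against a 40h target; the report's own balance additionally folds in the weekend and self-development entries.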
+<h3>Password and document store</h3>
+<p><span class="inlinecode">geheim.rb</span> is my personal password and document store ("geheim" is the German word for secret). It's written in Ruby and heavily relies on Git, FZF (for search), Vim and standard encryption algorithms. Unlike the standard <span class="inlinecode">pass</span> Unix password manager, <span class="inlinecode">geheim</span> also encrypts the file names and password titles.</p>
+<p>The tool is command line driven but also provides an interactive shell when invoked with <span class="inlinecode">geheim shell</span>. It also works on my Android phone via Termux so I have all my documents and passwords always with me. </p>
+<h3>Backup procedure</h3>
+<p><span class="inlinecode">backup</span> is a Bash script which runs once daily (or on every boot) on my home FreeBSD NAS server and performs backup-related tasks such as creating a local backup of my remote NextCloud instance, creating encrypted (incremental) ZFS snapshots of everything stored on the NAS, and synchronizing (via <span class="inlinecode">rsync</span>) backups to a remote cloud storage. It can also synchronize backups to a local external USB drive.</p>
+<a class="textlink" href="https://foo.zone/gemfeed/2016-04-03-offsite-backup-with-zfs.html">Check out my offsite backup series</a><br />
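The daily flow can be pictured like this (dataset, remote and snapshot naming are assumptions; the real backup script does more, e.g. the NextCloud pull and the USB sync):

```shell
#!/usr/bin/env bash
# Sketch of the daily NAS backup flow (all names are assumptions).
set -euo pipefail

dataset=tank/data
remote=backup@cloud.example.org:backups/

snapshot_name () { date +%Y%m%d; }

daily_backup () {
    local snap; snap=$(snapshot_name)
    zfs snapshot "$dataset@$snap"
    # An incremental follow-up to a previous snapshot would look like:
    #   zfs send -i "$dataset@<previous>" "$dataset@$snap" | ...
    rsync -az --delete "/$dataset/" "$remote"
}

# daily_backup    # run from cron once a day, or at boot
```

Dated snapshot names make the incremental chain trivial to reason about: each day's snapshot is sent relative to the previous day's.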
+<h2>konpeito.media</h2>
+<p>Here's a bonus...</p>
+<pre>
+ ▄ █ ▄ ▄ █ ▄ ▄ █ ▄
+ ▄▀█▀▄ ▄▀█▀▄ ▄▀█▀▄
+ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▀ ▀ ▀
+ █ ▄▄ ▄▄ █
+ █ █ █▀▀▀█ █ █ █ ▄▀ ▄▀▀▀▀▄ █▄ █ █▀▀▀▀▀▄ ▄▀▀▀▀▄ █ ▀▀▀█▀▀▀ ▄▀▀▀▀▄
+ █ ▀▀▀▀▀▀▀▀▀ █ █ ▄█ █ █ █ ▀▄ █ █▄▄▄▄▄▀ █▄▄▄▄▄▄█ █ █ █ █
+ █ ▄▀▀▀▀▀▀▀▀▀▀▀▄ █ █▀ ▀▄ ▀▄ ▄▀ █ ▀▄█ █ ▀▄ ▄ █ █ ▀▄ ▄▀
+ ▀▄█▄█▄▄▄▄▄▄▄█▄█▄▀ ▀ ▀ ▀▀▀▀ ▀ ▀ ▀ ▀▀▀▀ ▀ ▀ ▀▀▀
+</pre><br />
+<p>*THIS ISN'T MY PROJECT* but I found KONPEITO an interesting Gemini capsule. It's a quarterly released lo-fi music mixtape distributed only through Gemini (and not the web).</p>
+<a class="textlink" href="https://konpeito.media">https://konpeito.media</a><br />
+<p>If you wonder what Gemini is:</p>
+<a class="textlink" href="https://foo.zone/gemfeed/2021-04-24-welcome-to-the-geminispace.html">Welcome to the Geminispace</a><br />
+<p>E-Mail me your comments to paul at buetow dot org!</p>
+ </div>
+ </content>
+ </entry>
+ <entry>
<title>Perl is still a great choice</title>
<link href="https://foo.zone/gemfeed/2022-05-27-perl-is-still-a-great-choice.html" />
<id>https://foo.zone/gemfeed/2022-05-27-perl-is-still-a-great-choice.html</id>
@@ -95,7 +335,7 @@
<a class="textlink" href="https://www.perl.com/article/on-sigils/">https://www.perl.com/article/on-sigils/</a><br />
<h2>Where do I personally still use perl?</h2>
<ul>
-<li>I use Rexify for my OpenBSD server automation. Rexify is a configuration management system programming in Perl with similar features to Ansible but less bloated. It fits my personal needs perfectly.</li>
+<li>I use Rexify for my OpenBSD server automation. Rexify is a configuration management system developed in Perl with similar features to Ansible but less bloated. It suits my personal needs perfectly.</li>
<li>I have written a couple of smaller to medium-sized Perl scripts which I (mostly) still use regularly. You can find them on my Codeberg page.</li>
<li>My day-to-day workflow heavily relies on "ack-grep". Ack is a tool developed in Perl aimed at programmers and can be used for quick searches on source code at the command line.</li>
<li>I aim to leave my OpenBSD servers as "vanilla" as possible (trying to rely only on the standard/base installation without installing additional software from the packaging system or ports tree). All my scripts here are written either in Bourne shell or in Perl, so there is no need to install additional interpreters.</li>
@@ -139,7 +379,7 @@
. . . . * . * . +.. . *
. . . . . . . . + . . +
- the universe
-</pre>
+</pre><br />
<p class="quote"><i>Published by Paul at 2022-04-10, last updated at 2022-04-18</i></p>
<h2>Prelude</h2>
<p>I have been participating in an annual work-internal project contest (we call it Pet Project contest) since I moved to London and switched jobs to my current employer. I am very happy to say that I won a "silver" prize last week here 🎆. Over the last couple of years I have been a finalist in this contest six times and won some kind of prize five times. Some of my projects were also released as open source software. One had a magazine article published, and for another one I wrote an article on my employer's engineering blog. If you have followed all my posts on this blog (the one you are currently reading), then you have probably figured out what these projects were:</p>
@@ -196,7 +436,7 @@ learn () {
perltidy - a perl script indenter and reformatter
❯ learn
timedatectl - Control the system time and date
-</pre>
+</pre><br />
<h2>Conclusion</h2>
<p>This all summarises advice I have, really.  I hope this was interesting and helpful for you.</p>
<p>I have one more small tip: I never published a blog post the same day I wrote it. After finishing writing it, I always wait for a couple of days. In all cases so far, I had an additional idea to add or to fine tune the blog post.</p>
@@ -239,7 +479,7 @@ learn () {
] ~ ~ |
| |
| |
-</pre>
+</pre><br />
<p class="quote"><i>Published by Paul at 2022-03-06</i></p>
<p>I have recently released DTail 4.0.0 and this blog post goes through all the new goodies. You can also read my previous post about DTail in case you wonder what DTail is:</p>
<a class="textlink" href="https://foo.zone/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program.html">DTail - The distributed log tail program</a><br />
@@ -265,7 +505,7 @@ const (
Trace level = iota
All level = iota
)
-</pre>
+</pre><br />
<p>DTail also supports multiple log outputs (e.g. to file or to stdout). More are now easily pluggable with the new logging package. The output can also be "enriched" (default) or "plain" (read more about that further below).</p>
<h3>Configurable terminal color codes</h3>
<p>A complaint I received from users of DTail 3 concerned the terminal colors used for the output. Under some circumstances (terminal configuration) they made the output difficult to read, so users defaulted to "--noColor" (disabling colored output completely). I took it to heart and rewrote the color handling. It's now possible to configure the foreground and background colors and an attribute (e.g. dim, bold, ...).</p>
@@ -363,7 +603,7 @@ const (
},
...
}
-</pre>
+</pre><br />
<p>You notice the different sections - these are different contexts:</p>
<ul>
<li>Remote: Color configuration for all log lines sent remotely from the server to the client. </li>
@@ -375,40 +615,40 @@ const (
<p>When you do so, make sure that you check your "dtail.json" against the JSON schema file. This is to ensure that you don't configure an invalid color accidentally (requires "jsonschema" to be installed on your computer). Furthermore, the schema file is also a good reference for all possible colors available:</p>
<pre>
jsonschema -i dtail.json schemas/dtail.schema.json
-</pre>
+</pre><br />
<h3>Serverless mode</h3>
<p>All DTail commands can now operate on log files (and other text files) directly without any DTail server running. So there isn't a need anymore to install a DTail server when you are on the target server already anyway, like the following example shows:</p>
<pre>
% dtail --files /var/log/foo.log
-</pre>
+</pre><br />
<p>or</p>
<pre>
% dmap --files /var/log/foo.log --query 'from TABLE select .... outfile result.csv'
-</pre>
+</pre><br />
<p>The way it works in the Go code is that a connection to a server is managed through an interface, and in serverless mode DTail calls through that interface into the server code directly, without any TCP/IP or SSH connection made in the background. This means that the binaries are a bit larger (they also ship with the code which would normally be executed by the server), but the increase in binary size is small.</p>
<h3>Shorthand flags</h3>
<p>The "--files" flag from the previous example is now redundant. As a shorthand, it is now possible to do the following instead:</p>
<pre>
% dtail /var/log/foo.log
-</pre>
+</pre><br />
<p>Of course, this also works with all other DTail client commands (dgrep, dcat, ... etc).</p>
<h3>Spartan (aka plain) mode</h3>
<p>There's a plain mode, which makes DTail only print out the "plain" text of the files operated on (without any DTail specific enriched output). E.g.:</p>
<pre>
% dcat --plain /etc/passwd &gt; /etc/test
% diff /etc/test /etc/passwd # Same content, no diff
-</pre>
+</pre><br />
<p>This might be useful if you wanted to post-process the output. </p>
<h3>Standard input pipe</h3>
<p>In serverless mode, you might want to process your data in a pipeline. You can do that now too through an input pipe:</p>
<pre>
% dgrep --plain --regex 'somethingspecial' /var/log/foo.log |
dmap --query 'from TABLE select .... outfile result.csv'
-</pre>
+</pre><br />
<p>Or, use any other "standard" tool:</p>
<pre>
% awk '.....' &lt; /some/file | dtail ....
-</pre>
+</pre><br />
<h3>New command dtailhealth</h3>
<p>Prior to DTail 4, there was a flag for the "dtail" command to check the health of a remote DTail server (for use with monitoring systems such as Nagios). That has been moved out to a separate binary to reduce the complexity of the "dtail" command. The following checks whether DTail is operational on the current machine (you could also check a remote instance of the DTail server; just adjust the server address).</p>
<pre>
@@ -416,7 +656,7 @@ jsonschema -i dtail.json schemas/dtail.schema.json
#!/bin/sh
exec /usr/local/bin/dtailhealth --server localhost:2222
-</pre>
+</pre><br />
<h3>Improved documentation</h3>
<p>Some features, such as custom log formats and the map-reduce query language, are now documented. Also, the examples have been updated to reflect the new features added. This also includes the new animated example Gifs (plus documentation how they were created).</p>
<p>I must admit that not all features are documented yet:</p>
@@ -432,7 +672,7 @@ exec /usr/local/bin/dtailhealth --server localhost:2222
<p>How are the tests implemented? All integration tests are simply unit tests in the "./integrationtests" folder. They must be explicitly activated with:</p>
<pre>
% export DTAIL_INTEGRATION_TEST_RUN_MODE=yes
-</pre>
+</pre><br />
<p>Once done, first compile all commands, and then run the integration tests:</p>
<pre>
% make
@@ -441,7 +681,7 @@ exec /usr/local/bin/dtailhealth --server localhost:2222
.
% go clean -testcache
% go test -race -v ./integrationtests
-</pre>
+</pre><br />
<h3>Improved code</h3>
<p>Not that the code quality of DTail has been bad (I have been using Go vet and Go lint for previous releases and will keep using these), but this time I had new tools (such as SonarQube and BlackDuck) in my arsenal to:</p>
<ul>
@@ -498,7 +738,7 @@ exec /usr/local/bin/dtailhealth --server localhost:2222
______( (_ / \______
(FL) ,' ,-----' | \
`--{__________) \/ "Berkeley Unix Daemon"
-</pre>
+</pre><br />
<p class="quote"><i>Published by Paul at 2022-02-04, updated 2022-02-18</i></p>
<p>This is a list of Operating Systems I currently use. This list is in no particular order and also will be updated over time. The very first operating system I used was MS-DOS (mainly for games) and the very first Unix like operating system I used was SuSE Linux 5.3. My first smartphone OS was Symbian on a clunky Sony Ericsson device.</p>
<h2>Fedora Linux</h2>
@@ -533,7 +773,7 @@ root@rhea:/ # uname -a
GNU/kFreeBSD rhea.buetow.org 8.0-RELEASE-p5 FreeBSD 8.0-RELEASE-p5 #2: Sat Nov 27 13:10:09 CET
2010 root@saturn.buetow.org:/usr/obj/usr/srcs/freebsd.src8/src/sys/SERV10 x86 64 amd64 Intel(R)
Core(TM) i7 CPU 920 @ 2.67GHz GNU/kFreeBSD
-</pre>
+</pre><br />
<p>Currently, I use FreeBSD on my personal NAS server. The server is a regular PC with a bunch of hard drives and a ZFS RAIDZ (with 4x2TB drives) + a couple of external backup drives.</p>
<a class="textlink" href="https://www.FreeBSD.org">https://www.FreeBSD.org</a><br />
<h2>CentOS 7</h2>
@@ -652,7 +892,7 @@ GNU/kFreeBSD rhea.buetow.org 8.0-RELEASE-p5 FreeBSD 8.0-RELEASE-p5 #2: Sat Nov 2
| _| (_) | (_) | / / (_) | | | | __/
|_| \___/ \___(_)___\___/|_| |_|\___|
-</pre>
+</pre><br />
<p class="quote"><i>Published by Paul at 2022-01-23</i></p>
<p>I don't count this as a real blog post; it's more of an announcement (I aim to write one real post per month). From now on, "foo.zone" is the new address of this site. All other addresses will still forward to it and eventually (based on the traffic still going through them) will be deactivated.</p>
<p>As you can read on Wikipedia, "foo" is, alongside "bar" and "baz", a metasyntactic variable (you know what I mean if you are a programmer or IT person):</p>
@@ -702,7 +942,7 @@ GNU/kFreeBSD rhea.buetow.org 8.0-RELEASE-p5 FreeBSD 8.0-RELEASE-p5 #2: Sat Nov 2
/ / . / / .' . |
jgs^^^^^^^`^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Art by Joan Stark, mod. by Paul Buetow
-</pre>
+</pre><br />
<p class="quote"><i>Published by Paul at 2022-01-01, last updated at 2022-01-05</i></p>
<p>This is the second blog post in my Bash Golf series. This series is about random Bash tips, tricks and weirdnesses I came across. It's a collection of smaller articles I wrote in an older (German language) blog, which I translated and refreshed with some new content.</p>
<a class="textlink" href="https://foo.zone/gemfeed/2021-11-29-bash-golf-part-1.html">Bash Golf Part 1</a><br />
@@ -722,14 +962,14 @@ lrwx------. 1 paul paul 64 Nov 23 09:46 0 -&gt; /dev/pts/9
lrwx------. 1 paul paul 64 Nov 23 09:46 1 -&gt; /dev/pts/9
lrwx------. 1 paul paul 64 Nov 23 09:46 2 -&gt; /dev/pts/9
lr-x------. 1 paul paul 64 Nov 23 09:46 3 -&gt; /proc/162912/fd
-</pre>
+</pre><br />
<p>The following examples demonstrate two different ways to accomplish the same thing. The difference is that the first command prints "Foo" directly to stdout, while the second command explicitly redirects its stdout to the file behind its own stdin file descriptor (which, in an interactive shell, is the same terminal device):</p>
<pre>
❯ echo Foo
Foo
❯ echo Foo &gt; /proc/self/fd/0
Foo
-</pre>
+</pre><br />
<p>Other useful redirections are:</p>
<ul>
<li>Redirect stderr to stdout: "echo foo 2&gt;&amp;1"</li>
@@ -739,13 +979,13 @@ Foo
<pre>
❯ echo Foo 1&gt;&amp;2 2&gt;/dev/null
Foo
-</pre>
+</pre><br />
<p class="quote"><i>Update: A reader sent me an email and pointed out that the order of the redirections is important. </i></p>
<p>As you can see, the following will not print out anything:</p>
<pre>
❯ echo Foo 2&gt;/dev/null 1&gt;&amp;2
-</pre>
+</pre><br />
<p>A good description (also pointed out by the reader) can be found here:</p>
<a class="textlink" href="https://wiki.bash-hackers.org/howto/redirection_tutorial#order_of_redirection_ie_file_2_1_vs_2_1_file">Order of redirection</a><br />
<p>Ok, back to the original blog post. You can also use grouping here (neither of these commands will print out anything to stdout):</p>
@@ -755,7 +995,7 @@ Foo
❯ { { { echo Foo 1&gt;&amp;2; } 2&gt;&amp;1; } 1&gt;&amp;2; } 2&gt;/dev/null
❯ ( ( ( echo Foo 1&gt;&amp;2; ) 2&gt;&amp;1; ) 1&gt;&amp;2; ) 2&gt;/dev/null
-</pre>
+</pre><br />
<p>A handy way to list all open file descriptors is to use the "lsof" command (that's not a Bash built-in), where $$ is the process id (pid) of the current shell process:</p>
<pre>
❯ lsof -a -p $$ -d0,1,2
@@ -763,7 +1003,7 @@ COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
bash 62676 paul 0u CHR 136,9 0t0 12 /dev/pts/9
bash 62676 paul 1u CHR 136,9 0t0 12 /dev/pts/9
bash 62676 paul 2u CHR 136,9 0t0 12 /dev/pts/9
-</pre>
+</pre><br />
<p>Let's create our own descriptor "3" for redirection to a file named "foo":</p>
<pre>
❯ touch foo
@@ -778,7 +1018,7 @@ Bratwurst
❯ exec 3&gt;&amp;- # This closes fd 3.
❯ echo Steak &gt;&amp;3
-bash: 3: Bad file descriptor
-</pre>
+</pre><br />
<p>You can also override the default file descriptors, as the following example script demonstrates:</p>
<pre>
❯ cat grandmaster.sh
@@ -805,14 +1045,14 @@ echo Second line: $LINE2
# Restore default stdin and delete fd 6
exec 0&lt;&amp;6 6&lt;&amp;-
-</pre>
+</pre><br />
<p>Let's execute it:</p>
<pre>
❯ chmod 750 ./grandmaster.sh
❯ ./grandmaster.sh
First line: Learn You a Haskell
Second line: for Great Good
-</pre>
+</pre><br />
<h2>HERE</h2>
<p>I have mentioned HERE-documents and HERE-strings already in this post. Let's do some more examples. The following "cat" receives a multi-line string from stdin. In this case, the input is a HERE-document. As you can see, it also interpolates variables (in this case the output of "date" running in a subshell).</p>
<pre>
@@ -822,7 +1062,7 @@ Second line: for Great Good
&gt; END
Hello World
It's Fri 26 Nov 08:46:52 GMT 2021
-</pre>
+</pre><br />
<p>You can also write it the following way, but that's less readable (it's good for an obfuscation contest):</p>
<pre>
❯ &lt;&lt;END cat
@@ -831,7 +1071,7 @@ It's Fri 26 Nov 08:46:52 GMT 2021
&gt; END
Hello Universe
It's Fri 26 Nov 08:47:32 GMT 2021
-</pre>
+</pre><br />
<p>Besides HERE-documents, there are also so-called HERE-strings. Instead of...</p>
<pre>
❯ declare VAR=foo
@@ -839,24 +1079,24 @@ It's Fri 26 Nov 08:47:32 GMT 2021
&gt; echo '$VAR contains foo'
&gt; fi
$VAR contains foo
-</pre>
+</pre><br />
<p>...you can use a HERE-string like this:</p>
<pre>
❯ if grep -q foo &lt;&lt;&lt; "$VAR"; then
&gt; echo '$VAR contains foo'
&gt; fi
$VAR contains foo
-</pre>
+</pre><br />
<p>Or even shorter, you can do:</p>
<pre>
❯ grep -q foo &lt;&lt;&lt; "$VAR" &amp;&amp; echo '$VAR contains foo'
$VAR contains foo
-</pre>
+</pre><br />
<p>You can also use a Bash regex to accomplish the same thing, but the point of the examples so far was to demonstrate HERE-{documents,strings} and not Bash regular expressions:</p>
<pre>
❯ if [[ "$VAR" =~ foo ]]; then echo yay; fi
yay
-</pre>
+</pre><br />
<p>You can also use it with "read":</p>
<pre>
❯ read a &lt;&lt;&lt; ja
@@ -871,14 +1111,14 @@ NEIN!!!
Learn
❯ echo ${words[3]}
Golang
-</pre>
+</pre><br />
<p>The following is good for an obfuscation contest too:</p>
<pre>
❯ echo 'I like Perl too' &gt; perllove.txt
❯ cat - perllove.txt &lt;&lt;&lt; "$dumdidumstring"
Learn you a Golang for Great Good
I like Perl too
-</pre>
+</pre><br />
<h2>RANDOM</h2>
<p>RANDOM is a special built-in variable containing a different pseudo-random number each time it's used.</p>
<pre>
@@ -888,7 +1128,7 @@ I like Perl too
14997
❯ echo $RANDOM
9104
-</pre>
+</pre><br />
<p>That's very useful if you want to randomly delay the execution of your scripts when you run them on many servers concurrently, just to spread the server load caused by the script runs more evenly.</p>
<p>Let's say you want to introduce a random delay of 1 minute. You can accomplish it with:</p>
<pre>
@@ -917,7 +1157,7 @@ main
❯ ./calc_answer_to_ultimate_question_in_life.sh
Delaying script execution for 42 seconds...
Continuing script execution...
-</pre>
+</pre><br />
<h2>set -x and set -e and pipefail</h2>
<p>In my opinion, -x, -e and pipefail are the most useful Bash options. Let's have a look at them one after another.</p>
<h3>-x</h3>
@@ -932,11 +1172,11 @@ Continuing script execution...
++ echo 121
+ echo 'Square of 11 is 121'
Square of 11 is 121
-</pre>
+</pre><br />
<p>You can also set it when calling an external script without modifying the script itself:</p>
<pre>
❯ bash -x ./half_broken_script_to_be_debugged.sh
-</pre>
+</pre><br />
<p>Let's do that on one of the example scripts we covered earlier:</p>
<pre>
❯ bash -x ./grandmaster.sh
@@ -954,21 +1194,21 @@ First line: Learn You a Haskell
Second line: for Great Good
+ exec
-</pre>
+</pre><br />
<h3>-e</h3>
<p>This is a very important option to use when you are paranoid. That is, you should always "set -e" in your scripts when you need to make absolutely sure that they run successfully (meaning no command exits with an unexpected status code).</p>
<p>Ok, let's dig deeper:</p>
<pre>
❯ help set | grep -- -e
-e Exit immediately if a command exits with a non-zero status.
-</pre>
+</pre><br />
<p>As you can see in the following example, the Bash terminates after the execution of "grep", as "foo" does not contain "bar". Therefore, grep exits with 1 (unsuccessfully) and the shell aborts. As a result, "bar" will not be printed out:</p>
<pre>
❯ bash -c 'set -e; echo hello; grep -q bar &lt;&lt;&lt; foo; echo bar'
hello
❯ echo $?
1
-</pre>
+</pre><br />
<p>Whereas the outcome changes when the regex matches:</p>
<pre>
❯ bash -c 'set -e; echo hello; grep -q bar &lt;&lt;&lt; barman; echo bar'
@@ -976,7 +1216,7 @@ hello
bar
❯ echo $?
0
-</pre>
+</pre><br />
<p>So does this mean that grep will always make the shell terminate whenever its exit code isn't 0? That would render "set -e" quite unusable. There are many commands for which an exit status other than 0 should not terminate the whole script abruptly. Usually, what you want is to branch your code based on the outcome (exit code) of a command:</p>
<pre>
❯ bash -c 'set -e
@@ -988,7 +1228,7 @@ bar
&gt; fi'
❯ echo $?
1
-</pre>
+</pre><br />
<p>...but the example above won't reach any of the branches and won't print out anything, as the script terminates right after grep.</p>
<p>The proper solution is to use grep as an expression in a conditional (e.g. in an if-else statement):</p>
<pre>
@@ -1010,7 +1250,7 @@ not matching
matching
❯ echo $?
0
-</pre>
+</pre><br />
<p>You can also temporarily undo "set -e" if there is no other way:</p>
<pre>
❯ cat ./e.sh
@@ -1052,7 +1292,7 @@ foo
Hello World
Hello Universe
Hello You!
-</pre>
+</pre><br />
<p>Why does calling "foo" with no arguments make the script terminate? Because no argument was given, the argument list $@ is empty, so "shift" has nothing to shift and fails with a non-zero status.</p>
<p>Why would you want to use "shift" after function-local variable assignments? Have a look at my personal Bash coding style guide for an explanation :-):</p>
<a class="textlink" href="https://foo.zone/gemfeed/2021-05-16-personal-bash-coding-style-guide.html">./2021-05-16-personal-bash-coding-style-guide.html</a><br />
@@ -1063,14 +1303,14 @@ Hello You!
pipefail the return value of a pipeline is the status of
the last command to exit with a non-zero status,
or zero if no command exited with a non-zero status
-</pre>
+</pre><br />
<p>The following greps for paul in passwd and converts all lowercase letters to uppercase letters. The exit code of the pipe is 0, as the last command of the pipe (converting from lowercase to uppercase) succeeded:</p>
<pre>
❯ grep paul /etc/passwd | tr '[a-z]' '[A-Z]'
PAUL:X:1000:1000:PAUL BUETOW:/HOME/PAUL:/BIN/BASH
❯ echo $?
0
-</pre>
+</pre><br />
<p>Let's look at another example, where "TheRock" doesn't exist in the passwd file. However, the pipe's exit status is still 0 (success). That's because the last command ("tr" in this case) still succeeded. It just didn't get any input on stdin to process:</p>
<pre>
❯ grep TheRock /etc/passwd
@@ -1079,14 +1319,14 @@ PAUL:X:1000:1000:PAUL BUETOW:/HOME/PAUL:/BIN/BASH
❯ grep TheRock /etc/passwd | tr '[a-z]' '[A-Z]'
❯ echo $?
0
-</pre>
+</pre><br />
<p>To change this behaviour, pipefail can be used. Now, the pipe's exit status is 1 (fail), because the pipe contains at least one command (in this case grep) which exited with status 1:</p>
<pre>
❯ set -o pipefail
❯ grep TheRock /etc/passwd | tr '[a-z]' '[A-Z]'
❯ echo $?
1
-</pre>
+</pre><br />
<p>E-Mail me your comments to paul at buetow dot org!</p>
</div>
</content>
@@ -1124,7 +1364,7 @@ PAUL:X:1000:1000:PAUL BUETOW:/HOME/PAUL:/BIN/BASH
||| \ __/_|| __||__
-----||-/------`-._/||-o--o---o---
~~~~~'
-</pre>
+</pre><br />
<p class="quote"><i>Published by Paul at 2021-12-26, last updated at 2022-01-12</i></p>
<p>Log4shell (CVE-2021-44228) made it clear, once again, that working in information technology is not an easy job (especially when you are a DevOps person). I thought it would be interesting to summarize a few techniques to help you to relax.</p>
<p>(PS: When I say DevOps, I also mean Site Reliability Engineers and Sysadmins. I believe SRE, DevOps Engineer and Sysadmin are just synonymous titles for the same job.)</p>
@@ -1206,10 +1446,10 @@ PAUL:X:1000:1000:PAUL BUETOW:/HOME/PAUL:/BIN/BASH
/ / .' |
jgs^^^^^^^`^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Art by Joan Stark
-</pre>
+</pre><br />
<p class="quote"><i>Published by Paul at 2021-11-29, last updated at 2022-01-05</i></p>
<p>This is the first blog post in my Bash Golf series. This series is about random Bash tips, tricks and weirdnesses I came across. It's a collection of smaller articles I wrote in an older (German language) blog, which I translated and refreshed with some new content.</p>
-<a class="textlink" href="https://foo.zone/gemfeed/2021-11-29-bash-golf-part-1.html">Bash Golf Part 1 (you are reding this atm.)</a><br />
+<a class="textlink" href="https://foo.zone/gemfeed/2021-11-29-bash-golf-part-1.html">Bash Golf Part 1 (you are reading this atm.)</a><br />
<a class="textlink" href="https://foo.zone/gemfeed/2022-01-01-bash-golf-part-2.html">Bash Golf Part 2</a><br />
<h2>TCP/IP networking</h2>
<p>You probably know the Netcat tool, which is a Swiss army knife for TCP/IP networking on the command line. But did you know that the Bash natively supports TCP/IP networking?</p>
@@ -1218,7 +1458,7 @@ jgs^^^^^^^`^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
❯ cat &lt; /dev/tcp/time.nist.gov/13
59536 21-11-18 08:09:16 00 0 0 153.6 UTC(NIST) *
-</pre>
+</pre><br />
<p>The Bash treats /dev/tcp/HOST/PORT in a special way: it actually establishes a TCP connection to HOST:PORT. The example above redirects the TCP output of the time server to cat, which prints it on standard output (stdout).</p>
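<p>A practical sketch (Bash only, this doesn't work in plain sh; the host and port here are arbitrary examples): the same mechanism gives you a quick TCP port check without installing Netcat:</p>

```shell
# Quick TCP port probe via Bash's /dev/tcp pseudo-device.
# The subshell's "exec" fails when the connection can't be established.
port_open() {
    local host=$1 port=$2
    (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null
}

if port_open 127.0.0.1 22; then
    echo "port open"
else
    echo "port closed"
fi
```

<p>Opening the fd inside a subshell means it is closed again automatically when the probe returns.</p>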
<p>A more sophisticated example is firing up an HTTP request. Let's create a new read-write (rw) file descriptor (fd) 5, redirect the HTTP request string to it, and then read the response back:</p>
<pre>
@@ -1235,7 +1475,7 @@ Server: gws
Content-Length: 218
X-XSS-Protection: 0
X-Frame-Options: SAMEORIGIN
-</pre>
+</pre><br />
<p>You would assume that this also works in the ZSH, but it doesn't. This is one of the few things that work in the Bash but not in the ZSH. There might be ZSH plugins that do something similar, though.</p>
<h2>Process substitution</h2>
<p>The idea here is that you can read the output (stdout) of a command from a file descriptor:</p>
@@ -1256,7 +1496,7 @@ Access: 2021-11-20 10:59:31.482411961 +0000
Modify: 2021-11-20 10:59:31.482411961 +0000
Change: 2021-11-20 10:59:31.482411961 +0000
Birth: -
-</pre>
+</pre><br />
<p>This example doesn't make much sense practically speaking, but it clearly demonstrates how process substitution works. The standard output pipe of "uptime" is redirected to an anonymous file descriptor. That fd is then opened by the "cat" command as a regular file.</p>
<p>A useful use case is displaying the differences of two sorted files:</p>
<pre>
@@ -1278,11 +1518,11 @@ Change: 2021-11-20 10:59:31.482411961 +0000
❯ echo X &gt;&gt; /tmp/file-a.txt # Now, both files have the same content again.
❯ diff -u &lt;(sort /tmp/file-a.txt) &lt;(sort /tmp/file-b.txt)
-</pre>
+</pre><br />
<p>Another example is displaying the differences of two directories:</p>
<pre>
❯ diff -u &lt;(ls ./dir1/ | sort) &lt;(ls ./dir2/ | sort)
-</pre>
+</pre><br />
<p>More (Bash golfing) examples:</p>
<pre>
❯ wc -l &lt;(ls /tmp/) /etc/passwd &lt;(env)
@@ -1297,12 +1537,12 @@ Change: 2021-11-20 10:59:31.482411961 +0000
&gt; done &lt; &lt;(echo foo bar baz)
foo bar baz
-</pre>
+</pre><br />
<p>So far, we have only used process substitution for stdout redirection. But it also works for stdin. The following two commands result in the same outcome, but the second one writes the tar data stream to an anonymous file descriptor, which is substituted by the "bzip2" command reading the data stream from stdin, compressing it to its own stdout, which then gets redirected to a file:</p>
<pre>
❯ tar cjf file.tar.bz2 foo
❯ tar cf &gt;(bzip2 -c &gt; file.tar.bz2) foo
-</pre>
+</pre><br />
<p>Just think a while and see whether you understand fully what is happening here.</p>
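<p>If the process substitution variant still feels opaque: the same data flow (compressing the plain tar stream of the "foo" directory from above exactly once) can also be written as an ordinary pipe:</p>

```shell
# tar writes the uncompressed archive stream to stdout ("-f -"),
# and bzip2 compresses it into the target file.
tar cf - foo | bzip2 -c > file.tar.bz2
```

<p>The process substitution version merely replaces the pipe with an anonymous file descriptor that bzip2 reads from.</p>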
<h2>Grouping</h2>
<p>Command grouping can be quite useful for combining the output of multiple commands:</p>
@@ -1311,7 +1551,7 @@ foo bar baz
97
❯ ( ls /tmp; cat /etc/passwd; env; ) | wc -l
97
-</pre>
+</pre><br />
<p>But wait, what is the difference between curly braces and normal braces? I assumed that the normal braces create a subprocess whereas the curly ones don't, but I was wrong:</p>
<pre>
❯ echo $$
@@ -1320,7 +1560,7 @@ foo bar baz
62676
❯ ( echo $$; )
62676
-</pre>
+</pre><br />
<p>One difference is that the curly braces require you to end the last statement with a semicolon, whereas with the normal braces you can omit it:</p>
<pre>
❯ ( env; ls ) | wc -l
@@ -1328,7 +1568,7 @@ foo bar baz
❯ { env; ls } | wc -l
&gt;
&gt; ^C
-</pre>
+</pre><br />
<p>In case you know more (subtle) differences, please write me an E-Mail and let me know.</p>
<p class="quote"><i>Update: A reader sent me an E-Mail and pointed me to the Bash manual page, which explains the difference between () and {} (I should have checked that by myself):</i></p>
<pre>
@@ -1344,19 +1584,19 @@ foo bar baz
ters ( and ), { and } are reserved words and must occur where a reserved word
is permitted to be recognized. Since they do not cause a word break, they
must be separated from list by whitespace or another shell metacharacter.
-</pre>
+</pre><br />
<p>So I was right that () is executed in a subprocess. But why does $$ not show a different PID? Here, too, the answer is in the manual page (as pointed out by the reader):</p>
<pre>
$ Expands to the process ID of the shell. In a () subshell, it expands to the
process ID of the current shell, not the subshell.
-</pre>
+</pre><br />
<p>If we want to print the subshell PID, we can use the BASHPID variable:</p>
<pre>
❯ echo $BASHPID; { echo $BASHPID; }; ( echo $BASHPID; )
1028465
1028465
1028739
-</pre>
+</pre><br />
<h2>Expansions</h2>
<p>Let's start with simple examples:</p>
<pre>
@@ -1369,7 +1609,7 @@ $ Expands to the process ID of the shell. In a () subshell, it expands to
3
4
5
-</pre>
+</pre><br />
<p>You can also add leading zeroes or expand any number range:</p>
<pre>
❯ echo {00..05}
@@ -1378,29 +1618,29 @@ $ Expands to the process ID of the shell. In a () subshell, it expands to
000 001 002 003 004 005
❯ echo {201..205}
201 202 203 204 205
-</pre>
+</pre><br />
<p>It also works with letters:</p>
<pre>
❯ echo {a..e}
a b c d e
-</pre>
+</pre><br />
<p>Now it gets interesting. The following takes a list of words and expands it so that all words are quoted:</p>
<pre>
❯ echo \"{These,words,are,quoted}\"
"These" "words" "are" "quoted"
-</pre>
+</pre><br />
<p>Let's also expand to the cross product of two given lists:</p>
<pre>
❯ echo {one,two}\:{A,B,C}
one:A one:B one:C two:A two:B two:C
❯ echo \"{one,two}\:{A,B,C}\"
"one:A" "one:B" "one:C" "two:A" "two:B" "two:C"
-</pre>
+</pre><br />
<p>Just because we can:</p>
<pre>
❯ echo Linux-{one,two,three}\:{A,B,C}-FreeBSD
Linux-one:A-FreeBSD Linux-one:B-FreeBSD Linux-one:C-FreeBSD Linux-two:A-FreeBSD Linux-two:B-FreeBSD Linux-two:C-FreeBSD Linux-three:A-FreeBSD Linux-three:B-FreeBSD Linux-three:C-FreeBSD
-</pre>
+</pre><br />
<h2>- aka stdin and stdout placeholder</h2>
<p>Some commands and Bash builtins use "-" as a placeholder for stdin and stdout:</p>
<pre>
@@ -1414,7 +1654,7 @@ ONECHEESEBURGERPLEASE
Hello world
❯ cat - &lt;&lt;&lt; 'Hello world'
Hello world
-</pre>
+</pre><br />
<p>Let's walk through all three examples from the above snippet:</p>
<ul>
<li>The first example is obvious (the Bash builtin "echo" prints its arguments to stdout).</li>
@@ -1424,14 +1664,14 @@ Hello world
<p>The "tar" command understands "-" too. The following example tars up a local directory and sends the archive data to stdout (this is what "-f -" tells it to do). stdout is then piped via an SSH session to a remote tar process (running on buetow.org), which reads the data from stdin (as we told it with "-f -") and extracts it on the remote machine:</p>
<pre>
❯ tar -czf - /some/dir | ssh hercules@buetow.org tar -xzvf -
-</pre>
+</pre><br />
<p>This is yet another example of using "-", but this time using the "file" command:</p>
<pre>
$ head -n 1 grandmaster.sh
#!/usr/bin/env bash
$ file - &lt; &lt;(head -n 1 grandmaster.sh)
/dev/stdin: a /usr/bin/env bash script, ASCII text executable
-</pre>
+</pre><br />
<p>Some more golfing:</p>
<pre>
$ cat -
@@ -1441,7 +1681,7 @@ hello
$ file -
#!/usr/bin/perl
/dev/stdin: Perl script text executable
-</pre>
+</pre><br />
<h2>Alternative argument passing</h2>
<p>This is a quite unusual way of passing arguments to a Bash script:</p>
<pre>
@@ -1450,7 +1690,7 @@ $ file -
declare -r USER=${USER:?Missing the username}
declare -r PASS=${PASS:?Missing the secret password for $USER}
echo $USER:$PASS
-</pre>
+</pre><br />
<p>So what we are doing here is passing the arguments to the script via environment variables. The script aborts with an error when an argument is undefined.</p>
<pre>
❯ chmod +x foo.sh
@@ -1462,17 +1702,17 @@ echo $USER:$PASS
1
❯ USER=paul PASS=secret ./foo.sh
paul:secret
-</pre>
+</pre><br />
<p>You have probably noticed this *strange* syntax:</p>
<pre>
❯ VARIABLE1=value1 VARIABLE2=value2 ./script.sh
-</pre>
+</pre><br />
<p>That's just another way to pass environment variables to a script. You could also write it like this:</p>
<pre>
❯ export VARIABLE1=value1
❯ export VARIABLE2=value2
❯ ./script.sh
-</pre>
+</pre><br />
<p>But the downside is that the variables will also be defined in your current shell environment and not just in the script's sub-process.</p>
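<p>A small sketch (with a made-up GREETING variable, assuming it isn't already set in your shell) demonstrating the scoping difference between the two styles:</p>

```shell
# The prefix form sets the variable only in the child's environment;
# the calling shell never sees it.
GREETING=hello bash -c 'echo "child sees: $GREETING"'
echo "parent sees: ${GREETING:-<unset>}"
```

<p>The first line prints "child sees: hello", the second "parent sees: &lt;unset&gt;": the assignment never reached the parent shell.</p>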
<h2>: aka the null command</h2>
<p>First, let's use the "help" Bash built-in to see what it says about the null command:</p>
@@ -1485,14 +1725,14 @@ paul:secret
Exit Status:
Always succeeds.
-</pre>
+</pre><br />
<p>PS: IMHO, people should use the Bash help more often. It is a very useful Bash reference. Too many of us fall back to a Google search and land on Stack Overflow. Sadly, there's no help built-in in the ZSH, though (so even when I am using the ZSH, I make use of the Bash help, as most of the built-ins are compatible).</p>
<p>OK, back to the null command. What happens when you try to run it? As you can see, absolutely nothing. And its exit status is 0 (success):</p>
<pre>
❯ :
❯ echo $?
0
-</pre>
+</pre><br />
<p>Why would that be useful? You can use it as a placeholder in an endless while-loop:</p>
<pre>
❯ while : ; do date; sleep 1; done
@@ -1501,7 +1741,7 @@ Sun 21 Nov 12:08:32 GMT 2021
Sun 21 Nov 12:08:33 GMT 2021
^C
-</pre>
+</pre><br />
<p>You can also use it as a placeholder for a function body that is not yet fully implemented, as an empty function will result in a syntax error:</p>
<pre>
❯ foo () { }
@@ -1509,11 +1749,11 @@ Sun 21 Nov 12:08:33 GMT 2021
❯ foo () { :; }
❯ foo
-</pre>
+</pre><br />
<p>Or use it as a placeholder for not yet implemented conditional branches:</p>
<pre>
❯ if foo; then :; else echo bar; fi
-</pre>
+</pre><br />
<p>Or (not recommended) as a fancy way to comment your Bash code:</p>
<pre>
❯ : I am a comment and have no other effect
@@ -1521,7 +1761,7 @@ Sun 21 Nov 12:08:33 GMT 2021
-bash: syntax error near unexpected token `('
❯ : "I am a comment and don't result in a syntax error ()"
-</pre>
+</pre><br />
<p>As you can see in the previous example, the Bash still tries to interpret some of the syntax in the text following ":". This can be exploited (also not recommended) like this:</p>
<pre>
❯ declare i=0
@@ -1532,7 +1772,7 @@ bash: 1: command not found...
❯ : $[ i = i + 1 ]
❯ echo $i
4
-</pre>
+</pre><br />
<p>For these kinds of expressions it's always better to use "let", though. And you should use $((...expression...)) instead of the old (deprecated) $[ ...expression... ] syntax, as this example demonstrates:</p>
<pre>
❯ declare j=0
@@ -1542,7 +1782,7 @@ bash: 1: command not found...
❯ let j=$((j + 1))
❯ echo $j
4
-</pre>
+</pre><br />
<h2>(No) floating point support</h2>
<p>I have to give a plus-point to the ZSH here, as the ZSH supports floating point calculations, whereas the Bash doesn't:</p>
<pre>
@@ -1555,13 +1795,13 @@ bash: line 1: 1/10.0 : syntax error: invalid arithmetic operator (error token is
❯ zsh -c 'echo $(( 1/10.0 ))'
0.10000000000000001
-</pre>
+</pre><br />
<p>It would be nice to have native floating point support in the Bash too, but you don't want to use the shell for complicated calculations anyway. So it's fine that the Bash doesn't have it, I guess.</p>
<p>In the Bash you will have to fall back to an external command like "bc" (the arbitrary precision calculator language):</p>
<pre>
❯ bc &lt;&lt;&lt; 'scale=2; 1/10'
.10
-</pre>
+</pre><br />
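<p>Alternatively (assuming awk is available, which it virtually always is on Unix-like systems), awk can do the same floating point calculation:</p>

```shell
# awk supports floating point natively; print 1/10 with two decimals.
awk 'BEGIN { printf "%.2f\n", 1/10 }'
# Output: 0.10
```

<p>Like bc, awk is an external command, so this costs a fork per calculation, which is fine for occasional use.</p>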
<p>See you later for the next post of this series. E-Mail me your comments to paul at buetow dot org!</p>
</div>
</content>
@@ -1587,7 +1827,7 @@ bash: line 1: 1/10.0 : syntax error: invalid arithmetic operator (error token is
(__((__((___()()_____________________________________// |ACME |
(__((__((___()()()------------------------------------' |_____|
ASCII Art by Clyde Watson
-</pre>
+</pre><br />
<p class="quote"><i>Published by Paul at 2021-10-22</i></p>
<p>I have seen many different setups and infrastructures during my career. My roles have always included front-line, ad-hoc fire-fighting of production issues. This often involves identifying and fixing them under time pressure, without the comfort of 2-week-long SCRUM sprints and without an exhaustive QA process. I also wrote a lot of code (Bash, Ruby, Perl, Go, and a little Java), and I followed the typical software development process, but that did not always apply to critical production issues.</p>
<p>Unfortunately, no system is 100% reliable, and there is always a subset of the possible problem-space you cannot be prepared for. IT infrastructures can be complex. Not even mentioning Kubernetes yet, a Microservice-based infrastructure can complicate things even further. You can take care of 99% of all potential problems by following all DevOps best practices. Those best practices are not the subject of this blog post; this post is about the sub-1% of issues arising from nowhere that you can't be prepared for.</p>
@@ -1671,7 +1911,7 @@ bash: line 1: 1/10.0 : syntax error: invalid arithmetic operator (error token is
/ ********** \ / ********** \
/ ************ \ / ************ \
-------------------- --------------------
-</pre>
+</pre><br />
<p class="quote"><i>Published by Paul at 2021-09-12, last updated at 2022-04-21</i></p>
<p>A robust computer system must be kept simple and stupid (KISS). The fancier the system is, the more can break. Unfortunately, most systems tend to become complex and challenging to maintain in today's world. In the early days, so I was told, engineers understood every part of the system, but nowadays, we see more of the "lasagna" stack. One layer or framework is built on top of another layer, and in the end, nobody has got a clue what's going on.</p>
<h1>Need faster hardware</h1>
@@ -1738,7 +1978,7 @@ bash: line 1: 1/10.0 : syntax error: invalid arithmetic operator (error token is
| \___'.-`. '.
| | `---'
'^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^' LGB - Art by lgbearrd
-</pre>
+</pre><br />
<p class="quote"><i>Published by Paul at 2021-08-01</i></p>
<p>I believe that it is essential to always have free and open-source alternatives to any kind of closed-source proprietary software available to choose from. But there are a couple of points you need to take into consideration. </p>
<h2>The costs of open-source</h2>
@@ -1863,7 +2103,7 @@ Hello Ruby
&gt;&gt; self.puts 'Hello World'
Hello World
=&gt; nil
-</pre>
+</pre><br />
<p>Ruby offers a lot of syntactic sugar and seemingly magic, but it all comes back to objects and messages to objects under the hood. As all is hidden in objects, you can unwrap and even change the magic and see what's happening under the hood. Then, suddenly everything makes so much sense.</p>
<h3>Functional programming</h3>
<p>Ruby embraces an object-oriented programming style. But there is good news for fans of the functional programming paradigm: From immutable data (frozen objects), pure functions, lambdas and higher-order functions, lazy evaluation, tail-recursion optimization, method chaining, currying and partial function application, all of that is there. I am delighted about that, as I am a big fan of functional programming (having played with Haskell and Standard ML before).</p>
@@ -1927,7 +2167,7 @@ Hello World
: \_``[]--[]|::::'\_;' )-'..`._ .-'\``:: ` . \
\___.&gt;`''-.||:.__,' SSt |_______`&gt; &lt;_____:::. . . \ _/
`+a:f:......jrei'''
-</pre>
+</pre><br />
<p class="quote"><i>Published by Paul at 2021-06-05</i></p>
<p>You might have read my previous blog post about entering the Geminispace, where I pointed out the benefits of having and maintaining an internet presence there. This whole site (the blog and all other pages) is composed in the Gemtext markup language. </p>
<a class="textlink" href="https://foo.zone/gemfeed/2021-04-24-welcome-to-the-geminispace.html">Welcome to the Geminispace</a><br />
@@ -1965,7 +2205,7 @@ paul in uranus in gemtexter on 🌱 main
30 lib/log.source.sh
63 lib/md.source.sh
834 total
-</pre>
+</pre><br />
<p>This way, the script could grow far beyond 1000 lines of code and still be maintainable. With more features, execution speed may slowly become a problem, though. I already notice that Gemtexter doesn't produce results instantly but requires a few seconds of runtime. That's not a problem yet.</p>
<h3>Bash best practices and ShellCheck</h3>
<p>While working on Gemtexter, I also had a look at the Google Shell Style Guide and wrote a blog post on that:</p>
@@ -1985,13 +2225,13 @@ gemtext='=&gt; http://example.org Description of the link'
assert::equals "$(generate::make_link html "$gemtext")" \
'&lt;a class="textlink" href="http://example.org"&gt;Description of the link&lt;/a&gt;&lt;br /&gt;'
-</pre>
+</pre><br />
<h3>Markdown unit test example</h3>
<pre>
gemtext='=&gt; http://example.org Description of the link'
assert::equals "$(generate::make_link md "$gemtext")" \
'[Description of the link](http://example.org) '
-</pre>
+</pre><br />
<h2>Handcrafted HTML styles</h2>
<p>I had a look at some ready-made, off-the-shelf CSS styles, but they all seemed too bloated. There is a whole industry selling CSS styles on the interweb. I preferred a simple and minimalist style for the HTML site. So I handcrafted the Cascading Style Sheets manually with love and included them in the HTML header template. </p>
<p>For now, I have to re-generate all HTML files whenever the CSS changes. That should not be an issue now, but I might move the CSS into a separate file one day.</p>
@@ -2038,7 +2278,7 @@ assert::equals "$(generate::make_link md "$gemtext")" \
|____ []* ____ | ==||
// \\ // \\ |===|| hjw
"\__/"---------------"\__/"-+---+'
-</pre>
+</pre><br />
<p class="quote"><i>Published by Paul at 2021-05-16</i></p>
<p>Lately, I have been polishing and writing a lot of Bash code. It's not that I never wrote much Bash before, but now that I have also looked through the Google Shell Style Guide, I thought it was time to write down my thoughts on it. I agree with the guide on most, but not all, points. </p>
<a class="textlink" href="https://google.github.io/styleguide/shellguide.html">Google Shell Style Guide</a><br />
@@ -2048,11 +2288,11 @@ assert::equals "$(generate::make_link md "$gemtext")" \
<p>Google recommends always using...</p>
<pre>
#!/bin/bash
-</pre>
+</pre><br />
<p>... as the shebang line, but that does not work on all Unix and Unix-like operating systems (e.g., the *BSDs don't have Bash installed to /bin/bash). Better is:</p>
<pre>
#!/usr/bin/env bash
-</pre>
+</pre><br />
<h3>Two space soft-tabs indentation</h3>
<p>I know there have been many tab- and soft-tab wars on this planet. Google recommends using two space soft-tabs for Bash scripts. </p>
<p>I don't care if I use two or four space indentations. I agree, however, that we should not use tabs. I tend to use four-space soft-tabs as that's how I currently configured Vim for any programming language. What matters most, though, is consistency within the same script/project.</p>
@@ -2069,7 +2309,7 @@ command1 \
| command2 \
| command3 \
| command4
-</pre>
+</pre><br />
<p>I think there is a better way, like the following, which is less noisy. The trailing pipe | already indicates to Bash that another command is expected, thus making the explicit line breaks with \ obsolete:</p>
<pre>
# Long commands
@@ -2077,7 +2317,7 @@ command1 |
command2 |
command3 |
command4
-</pre>
+</pre><br />
<h3>Quoting your variables</h3>
<p>Google recommends always quoting your variables. Generally, it would be best to do that only for variables where you are unsure about their content/values (e.g., the content comes from an external input source and may contain whitespace or other special characters). In my opinion, the code becomes quite noisy when you always quote your variables like this:</p>
<pre>
@@ -2086,7 +2326,7 @@ greet () {
local -r name="${2}"
echo "${greeting} ${name}!"
}
-</pre>
+</pre><br />
<p>In this particular example, I agree that you should quote them as you don't know the input (are there, for example, whitespace characters?). But if you are sure that you are only using simple bare words, then I think that the code looks much cleaner when you do this instead:</p>
<pre>
say_hello_to_paul () {
@@ -2094,13 +2334,13 @@ say_hello_to_paul () {
local -r name=Paul
echo "$greeting $name!"
}
-</pre>
+</pre><br />
<p>You see, I also omitted the curly braces { } around the variables. I only use the curly braces around variables when it makes the code either easier/clearer to read or if it is necessary to use them:</p>
<pre>
declare FOO=bar
# Curly braces around FOO are necessary
echo "foo${FOO}baz"
-</pre>
+</pre><br />
<p>A few more words on always quoting variables: For the sake of consistency (and for making ShellCheck happy), I am not against quoting everything I encounter. I also think that the larger a Bash script becomes, the more critical it becomes to always quote variables. That's because it becomes more likely that you won't remember that some of the functions don't work on values with spaces in them, for example. It's just that I won't quote everything in every small script I write. </p>
<h3>Prefer built-in commands over external commands</h3>
<p>Google recommends using the built-in commands over available external commands where possible:</p>
@@ -2112,7 +2352,7 @@ substitution="${string/#foo/bar}"
# Instead of this:
addition="$(expr "${X}" + "${Y}")"
substitution="$(echo "${string}" | sed -e 's/^foo/bar/')"
-</pre>
+</pre><br />
<p>I can't entirely agree here. The external commands (especially sed) are much more sophisticated and powerful than the built-in Bash versions. Sed can do much more than Bash can ever do by itself when it comes to text manipulation (the name "sed" stands for stream editor, after all).</p>
<p>I prefer to do light text processing with the Bash built-ins and more complicated text processing with external programs such as sed, grep, awk, cut, and tr. However, there is also medium-light text processing where I would still want to use external programs. That is because I remember how to use them better than the Bash built-ins. Bash can get relatively obscure here (even Perl will be more readable then - side note: I love Perl).</p>
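<p>To illustrate that point with a small sketch of my own (the path value is just an example): parameter expansions work fine for light processing, but their syntax is cryptic compared to the external equivalents:</p>

```shell
#!/usr/bin/env bash
path='/var/log/app/error.log'

# Built-in parameter expansion: works, but the syntax is cryptic.
echo "${path##*/}"   # strip longest */ prefix: error.log
echo "${path%/*}"    # strip shortest /* suffix: /var/log/app

# The external equivalents state their intent in plain words:
basename "$path"     # error.log
dirname "$path"      # /var/log/app
```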
<p>Also, you would want to use an external command for floating-point calculations (e.g., bc) instead of the Bash built-ins (worth noting that ZSH supports floating-point arithmetic built in).</p>
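<p>A quick sketch of why: Bash arithmetic is integer only and silently truncates the fraction, while an external program (awk here; bc works just as well) handles floating point fine:</p>

```shell
#!/usr/bin/env bash
# Built-in arithmetic truncates the fractional part:
echo $(( 10 / 4 ))                        # prints 2

# An external command handles floating point:
awk 'BEGIN { printf "%.2f\n", 10 / 4 }'   # prints 2.50
```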
@@ -2135,7 +2375,7 @@ buy_soda () {
}
buy_soda $I_NEED_THE_BUZZ
-</pre>
+</pre><br />
<h3>Non-evil alternative to variable assignments via eval</h3>
<p>Google is of the opinion that eval should be avoided. I think so too. They list these examples in their guide:</p>
<pre>
@@ -2146,7 +2386,7 @@ eval $(set_my_variables)
# What happens if one of the returned values has a space in it?
variable="$(eval some_function)"
-</pre>
+</pre><br />
<p>However, if I want to read variables from another file, I don't have to use eval here. I only have to source the file:</p>
<pre>
% cat vars.source.sh
@@ -2156,7 +2396,7 @@ declare bay=foo
% bash -c 'source vars.source.sh; echo $foo $bar $baz'
bar baz foo
-</pre>
+</pre><br />
<p>And suppose I want to assign variables dynamically. In that case, I could just run an external script and source its output (This is how you could do metaprogramming in Bash without the use of eval - write code which produces code for immediate execution):</p>
<pre>
% cat vars.sh
@@ -2168,7 +2408,7 @@ END
% bash -c 'source &lt;(./vars.sh); echo "Hello $user, it is $date"'
Hello paul, it is Sat 15 May 19:21:12 BST 2021
-</pre>
+</pre><br />
<p>The downside is that ShellCheck won't be able to follow the dynamic sourcing anymore.</p>
<h3>Prefer pipes over arrays for list processing</h3>
<p>When I do list processing in Bash, I prefer to use pipes. You can chain them through Bash functions as well, which is pretty neat. Usually, my list processing scripts are of a structure like this:</p>
@@ -2206,7 +2446,7 @@ main () {
}
main
-</pre>
+</pre><br />
<p>Stdout is always passed through the pipe to the following stage. Stderr is used for info logging.</p>
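<p>A minimal, self-contained sketch of that structure (the stage names and data are made up for illustration): each stage reads stdin, writes results to stdout, and logs to stderr, so the stages chain together with plain pipes:</p>

```shell
#!/usr/bin/env bash
# Info logging goes to stderr so it never pollutes the data stream.
log () { echo "INFO: $*" >&2; }

list_items () {
    log 'listing items'
    printf '%s\n' one two three
}

filter_items () {
    log 'filtering items'
    grep -v two
}

transform_items () {
    log 'transforming items'
    tr '[:lower:]' '[:upper:]'
}

main () {
    list_items | filter_items | transform_items
}

main
```

Running it prints ONE and THREE to stdout, with the INFO lines arriving separately on stderr.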
<h3>Assign-then-shift</h3>
<p>I often refactor existing Bash code. That leads me to adding and removing function arguments quite often. It's pretty repetitive work changing the $1, $2... function argument numbers every time you change the order or add/remove possible arguments.</p>
@@ -2218,7 +2458,7 @@ some_function () {
local -r param_bay="$1"; shift
...
}
-</pre>
+</pre><br />
<p>Want to add a param_baz? Just do this:</p>
<pre>
some_function () {
@@ -2228,7 +2468,7 @@ some_function () {
local -r param_bay="$1"; shift
...
}
-</pre>
+</pre><br />
<p>Want to remove param_foo? Nothing easier than that:</p>
<pre>
some_function () {
@@ -2237,7 +2477,7 @@ some_function () {
local -r param_bay="$1"; shift
...
}
-</pre>
+</pre><br />
<p>As you can see, I didn't need to change any other assignments within the function. Of course, you would also need to change the function argument lists everywhere the function is invoked - you would do that within the same refactoring session.</p>
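<p>A tiny runnable example of the pattern (the function name and values are made up): every parameter is always taken from $1, so reordering, adding, or removing a parameter touches only one line:</p>

```shell
#!/usr/bin/env bash
greet () {
    local -r greeting="$1"; shift
    local -r name="$1"; shift
    echo "$greeting, $name!"
}

greet Hello Paul   # prints: Hello, Paul!
```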
<h3>Paranoid mode</h3>
<p>I call this the paranoid mode. The Bash will stop executing when a command exits with a status not equal to 0:</p>
@@ -2245,7 +2485,7 @@ some_function () {
set -e
grep -q foo &lt;&lt;&lt; bar
echo Jo
-</pre>
+</pre><br />
<p>Here 'Jo' will never be printed out as the grep didn't find any match. It's unrealistic for most scripts to run purely in paranoid mode, so there must be a way to add exceptions. My critical Bash scripts tend to look like this:</p>
<pre>
#!/usr/bin/env bash
@@ -2268,7 +2508,7 @@ some_function () {
fi
...
}
-</pre>
+</pre><br />
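<p>For completeness, here is a small sketch of my own showing how such exceptions can look under set -e (the commands are illustrative, not taken from the script above):</p>

```shell
#!/usr/bin/env bash
set -euo pipefail

# A failing command would normally abort the script. Appending
# || true declares the failure acceptable:
grep -q foo <<< bar || true
echo 'still alive'

# Testing the exit status in an if-condition is exempt from set -e too:
if ! grep -q foo <<< bar; then
    echo 'no match, and that is fine'
fi
```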
<h2>Learned</h2>
<p>There are also a couple of things I've learned from Google's guide.</p>
<h3>Unintended lexicographical comparison</h3>
@@ -2278,19 +2518,19 @@ if [[ "${my_var}" &gt; 3 ]]; then
# True for 4, false for 22.
do_something
fi
-</pre>
+</pre><br />
<p>... but it is probably an unintended lexicographical comparison. A correct way would be:</p>
<pre>
if (( my_var &gt; 3 )); then
do_something
fi
-</pre>
+</pre><br />
<p>or</p>
<pre>
if [[ "${my_var}" -gt 3 ]]; then
do_something
fi
-</pre>
+</pre><br />
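<p>A quick demonstration of the difference (my own snippet, using the value 22 from the comment above): as a string, "22" sorts before "3", so the lexicographical test is false even though 22 is numerically greater:</p>

```shell
#!/usr/bin/env bash
my_var=22

# String comparison: "22" sorts before "3", so this branch is false:
if [[ "${my_var}" > 3 ]]; then
    echo 'lexicographically greater'
else
    echo 'lexicographically smaller'
fi

# Numeric comparison: 22 is of course greater than 3:
if (( my_var > 3 )); then
    echo 'numerically greater'
fi
```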
<h3>PIPESTATUS</h3>
<p>I have never used the PIPESTATUS variable before. I knew it was there, but I never bothered to thoroughly understand how it works until now.</p>
<p>The PIPESTATUS variable in Bash allows checking of the return code from all parts of a pipe. If it's only necessary to check the success or failure of the whole pipe, then the following is acceptable:</p>
@@ -2299,7 +2539,7 @@ tar -cf - ./* | ( cd "${dir}" &amp;&amp; tar -xf - )
if (( PIPESTATUS[0] != 0 || PIPESTATUS[1] != 0 )); then
echo "Unable to tar files to ${dir}" &gt;&amp;2
fi
-</pre>
+</pre><br />
<p>However, as PIPESTATUS will be overwritten as soon as you do any other command, if you need to act differently on errors based on where it happened in the pipe, you'll need to assign PIPESTATUS to another variable immediately after running the command (don't forget that [ is a command and will wipe out PIPESTATUS).</p>
<pre>
tar -cf - ./* | ( cd "${DIR}" &amp;&amp; tar -xf - )
@@ -2310,7 +2550,7 @@ fi
if (( return_codes[1] != 0 )); then
do_something_else
fi
-</pre>
+</pre><br />
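<p>A minimal runnable illustration of that rule (the pipeline itself is made up): copy PIPESTATUS into another array immediately, before any other command overwrites it:</p>

```shell
#!/usr/bin/env bash
# printf succeeds (0), grep finds no match (1), cat succeeds (0):
printf 'a\nb\n' | grep -q z | cat > /dev/null

# Copy immediately - even [ is a command and would wipe PIPESTATUS:
return_codes=("${PIPESTATUS[@]}")

echo "${return_codes[0]} ${return_codes[1]} ${return_codes[2]}"   # 0 1 0
```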
<h2>Use common sense and BE CONSISTENT.</h2>
<p>The following two paragraphs are quoted verbatim from the Google guidelines. But they hit the nail on the head:</p>
<p class="quote"><i>If you are editing code, take a few minutes to look at the code around you and determine its style. If they use spaces around their if clauses, you should, too. If their comments have little boxes of stars around them, make your comments have little boxes of stars around them too.</i></p>
@@ -2357,7 +2597,7 @@ fi
''( .'\.' ' .;'
'.;.;' ;'.;' ..;;' AsH
-</pre>
+</pre><br />
<h2>Motivation</h2>
<h3>My urge to revamp my personal website</h3>
<p>For some time, I had the urge to revamp my personal website. Not to update the technology and its design, but to update all the content (and keep it current) and to start a small tech blog again. So, unconsciously, I began to search for an excellent platform to do all of that in a KISS (keep it simple &amp; stupid) way.</p>
@@ -2443,7 +2683,7 @@ fi
<p>The following example would connect to all DTail servers listed in the serverlist.txt, follow all files with the ending .log and filter for lines containing the string error. You can specify any Go compatible regular expression. In this example we add the case-insensitive flag to the regex:</p>
<pre>
dtail --servers serverlist.txt --files '/var/log/*.log' --regex '(?i:error)'
-</pre>
+</pre><br />
<p>You usually want to specify a regular expression as a client argument. This will mean that responses are pre-filtered for all matching lines on the server-side and thus sending back only the relevant lines to the client. If your logs are growing very rapidly and the regex is not specific enough there might be the chance that your client is not fast enough to keep up processing all of the responses. This could be due to a network bottleneck or just as simple as a slow terminal emulator displaying the log lines on the client-side.</p>
<p>A green 100 in the client output before each log line received from the server always indicates that there were no such problems and 100% of all log lines could be displayed on your terminal (have a look at the animated Gifs in this post). If the percentage falls below 100 it means that some of the channels used by the servers to send data to the client are congested and lines were dropped. In this case, the color will change from green to red. The user then could decide to run the same query but with a more specific regex.</p>
<p>You could also provide a comma-separated list of servers as opposed to a text file. There are many more options you could use. The ones listed here are just the very basic ones. There are more instructions and usage examples on the GitHub page. Also, you can study even more of the available options via the --help switch (some real treasures might be hidden there).</p>
@@ -2495,7 +2735,7 @@ dtail –servers serverlist.txt –files ‘/var/log/*.log’ –regex ‘(?i:er
| \ )|_
/`\_`&gt; &lt;_/ \
jgs\__/'---'\__/
-</pre>
+</pre><br />
<p class="quote"><i>Published by Paul at 2018-06-01, last updated at 2021-05-08</i></p>
<h2>Foreword</h2>
<p>This text was first published in the German IT-Administrator computer magazine. Three years have passed since then, and I decided to publish it on my blog too. </p>
@@ -2519,7 +2759,7 @@ jgs\__/'---'\__/
<p>I/O Riot was tested under CentOS 7.2 x86_64. For compiling, the GNU C compiler and Systemtap including kernel debug information are required. Other Linux distributions are theoretically compatible but untested. First of all, you should update the systems involved as follows:</p>
<pre>
% sudo yum update
-</pre>
+</pre><br />
<p>If the kernel is updated, please restart the system. The installation could be done without a restart, but that would complicate it. The installed kernel version should always correspond to the currently running kernel. You can then install I/O Riot as follows:</p>
<pre>
% sudo yum install gcc git systemtap yum-utils kernel-devel-$(uname -r)
@@ -2529,27 +2769,27 @@ jgs\__/'---'\__/
% make
% sudo make install
% export PATH=$PATH:/opt/ioriot/bin
-</pre>
+</pre><br />
<p>Note: It is not best practice to install any compilers on production systems. For further information please have a look at the enclosed README.md.</p>
<h3>Recording of I/O events</h3>
<p>All I/O events are kernel related. If a process wants to perform an I/O operation, such as opening a file, it must inform the kernel of this via a system call (syscall for short). I/O Riot relies on the Systemtap tool to record I/O syscalls. Systemtap, available for all popular Linux distributions, lets you take a look at the running kernel in production environments, which makes it ideal for monitoring all I/O-relevant Linux syscalls and logging them to a file. Other tools, such as strace, are not an alternative because they slow down the system too much.</p>
<p>During recording, ioriot acts as a wrapper and executes all relevant Systemtap commands for you. Use the following command to log all events to io.capture:</p>
<pre>
% sudo ioriot -c io.capture
-</pre>
+</pre><br />
<a href="https://foo.zone/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure1-ioriot-io-recording.png"><img alt="Screenshot I/O recording" title="Screenshot I/O recording" src="https://foo.zone/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure1-ioriot-io-recording.png" /></a><br />
<p>A Ctrl-C (SIGINT) stops recording prematurely. Otherwise, ioriot terminates itself automatically after 1 hour. Depending on the system load, the output file can grow to several gigabytes. Only metadata is logged, not the read and written data itself. When replaying later, only random data is used. Under certain circumstances, Systemtap may omit some system calls and issue warnings. This is to ensure that Systemtap does not consume too many resources.</p>
<h3>Test preparation</h3>
<p>Then copy io.capture to a test system. The log also contains all accesses to the pseudo file systems devfs, sysfs and procfs. Replaying these makes little sense, which is why you must first generate a cleaned, replayable version io.replay from io.capture as follows:</p>
<pre>
% sudo ioriot -c io.capture -r io.replay -u $USER -n TESTNAME
-</pre>
+</pre><br />
<p>The parameter -n allows you to assign a freely selectable test name. The system user under which the test is to be replayed is specified via parameter -u.</p>
<h3>Test Initialization</h3>
<p>The test will most likely want to access existing files. These are files the test wants to read but does not create by itself. The existence of these must be ensured before the test. You can do this as follows:</p>
<pre>
% sudo ioriot -i io.replay
-</pre>
+</pre><br />
<p>To avoid any damage to the running system, ioreplay only works in special directories. The tool creates a separate subdirectory for each file system mount point (e.g. /, /usr/local, /store/00,...) (here: /.ioriot/TESTNAME, /usr/local/.ioriot/TESTNAME, /store/00/.ioriot/TESTNAME,...). By default, the working directory of ioriot is /usr/local/ioriot/TESTNAME.</p>
<a href="https://foo.zone/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure2-ioriot-test-preparation.png"><img alt="Screenshot test preparation" title="Screenshot test preparation" src="https://foo.zone/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure2-ioriot-test-preparation.png" /></a><br />
<p>You must re-initialize the environment before each run. Data from previous tests will be moved to a trash directory automatically, which can be finally deleted with "sudo ioriot -P".</p>
@@ -2569,7 +2809,7 @@ done
echo $new_scheduler | sudo tee $scheduler
done
% sudo ioriot -R io.replay -S deadline.txt
-</pre>
+</pre><br />
<p>According to the results, the test could run 940 seconds faster with Deadline Scheduler:</p>
<pre>
% cat cfq.txt
@@ -2590,7 +2830,7 @@ Performed ioops: 218624596
Average ioops/s: 180234.62
Time ahead: 2392s
Total time: 1213.00s
-</pre>
+</pre><br />
<p>In any case, you should also set up a time series database, such as Graphite, where the I/O throughput can be plotted. Figures 4 and 5 show the read and write access times of both tests. The dip makes it clear when the CFQ test ended and the Deadline test started. The read latency of both tests is similar. Write latency is dramatically improved with the Deadline scheduler.</p>
<a href="https://foo.zone/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure4-ioriot-read-latency.png"><img alt="Graphite visualization of the mean read access times in ms with CFQ and Deadline Scheduler." title="Graphite visualization of the mean read access times in ms with CFQ and Deadline Scheduler." src="https://foo.zone/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure4-ioriot-read-latency.png" /></a><br />
<a href="https://foo.zone/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure5-ioriot-write-latency.png"><img alt="Graphite visualization of the average write access times in ms with CFQ and Deadline Scheduler." title="Graphite visualization of the average write access times in ms with CFQ and Deadline Scheduler." src="https://foo.zone/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure5-ioriot-write-latency.png" /></a><br />
@@ -2632,7 +2872,7 @@ Total time: 1213.00s
| |_| | |_| | __/_____| |___
\___/ \___/|_| \____|
-</pre>
+</pre><br />
<p class="quote"><i>Published by Paul at 2016-11-20, updated 2022-01-29</i></p>
<p>You can do a little object-oriented programming in the C programming language. However, that is, in my humble opinion, limited. It's easier to use a different programming language than C for OOP. But it's still an interesting exercise to try using C for this.</p>
<h2>Function pointers</h2>
@@ -2669,30 +2909,30 @@ int main(void) {
printf("%s(%f, %f) =&gt; %f\n", mult.name, a, b, mult.calculate(a,b));
printf("%s(%f, %f) =&gt; %f\n", div.name, a, b, div.calculate(a,b));
}
-</pre>
+</pre><br />
<p>As you can see, you can call the function (pointed to by the function pointer) with the same syntax as in C++ or Java:</p>
<pre>
printf("%s(%f, %f) =&gt; %f\n", mult.name, a, b, mult.calculate(a,b));
printf("%s(%f, %f) =&gt; %f\n", div.name, a, b, div.calculate(a,b));
-</pre>
+</pre><br />
<p>However, that's just syntactic sugar for:</p>
<pre>
printf("%s(%f, %f) =&gt; %f\n", mult.name, a, b, (*mult.calculate)(a,b));
printf("%s(%f, %f) =&gt; %f\n", div.name, a, b, (*div.calculate)(a,b));
-</pre>
+</pre><br />
<p>Output:</p>
<pre>
pbuetow ~/git/blog/source [38268]% gcc oop-c-example.c -o oop-c-example
pbuetow ~/git/blog/source [38269]% ./oop-c-example
Multiplication(3.000000, 2.000000) =&gt; 6.000000
Division(3.000000, 2.000000) =&gt; 1.500000
-</pre>
+</pre><br />
<p>Not complicated at all, but nice to know and helps to make the code easier to read!</p>
<h2>That's not OOP, though</h2>
<p>However, that's not really how it works in object-oriented languages such as Java and C++. The method call in this example is not a real method call, as "mult" and "div" are not "message receivers". By that I mean that the functions cannot access the state of the "mult" and "div" struct objects. In C, if you wanted to access the state of "mult" from within the calculate function, you would have to pass it as an argument:</p>
<pre>
mult.calculate(mult, a, b);
-</pre>
+</pre><br />
<h2>Real object oriented programming with C</h2>
<p>If you want to take it further, type "Object-Oriented Programming with ANSI-C" into your favourite internet search engine or follow the link below. It goes as far as writing a C preprocessor in AWK, which takes some object-oriented pseudo-C and transforms it to plain C so that the C compiler can compile it to machine code. This is similar to how the C++ language had its origins.</p>
<a class="textlink" href="https://www.cs.rit.edu/~ats/books/ooc.pdf">https://www.cs.rit.edu/~ats/books/ooc.pdf</a><br />
@@ -2754,7 +2994,7 @@ class { 'jail':
.
}
}
-</pre>
+</pre><br />
<h2>PF firewall</h2>
<p>Please note that "dns.ian.buetow.org" is just the Jail name of the master DNS server (and "caprica.ian.buetow.org" the name of the Jail for the slave DNS server) and that I am using the DNS names "dns1.buetow.org" (master) and "dns2.buetow.org" (slave) for the actual service names (these are the DNS servers visible to the public). Please also note that the IPv4 address is an internal one. I have configured PF to use NAT and PAT. The DNS ports are being forwarded (TCP and UDP) to that Jail. By default, all ports are blocked, so I am adding an exception rule for the IPv6 address. These are the PF rules in use:</p>
<pre>
@@ -2768,7 +3008,7 @@ pass in on re0 inet6 proto tcp from any to 2a01:4f8:120:30e8::15 port {53} flags
pass in on re0 inet6 proto udp from any to 2a01:4f8:120:30e8::15 port {53} flags S/SA keep state
.
.
-</pre>
+</pre><br />
<h2>Puppet managed BIND zone files</h2>
<p>In "manifests/dns.pp" (the Puppet manifest for the Master DNS Jail itself), I configured the BIND DNS server this way:</p>
<pre>
@@ -2776,7 +3016,7 @@ class { 'bind_freebsd':
config =&gt; "puppet:///files/bind/named.${::hostname}.conf",
dynamic_config =&gt; "puppet:///files/bind/dynamic.${::hostname}",
}
-</pre>
+</pre><br />
<p>The Puppet module is a pretty simple one. It installs the file "/usr/local/etc/named/named.conf" and it populates the "/usr/local/etc/named/dynamicdb" directory with all my zone files.</p>
<p>Once (Puppet-) applied inside of the Jail, I get this:</p>
<pre>
@@ -2819,7 +3059,7 @@ dns2 86400 IN AAAA 2a03:2500:1:6:20::
.
.
.
-</pre>
+</pre><br />
<p>That is my master DNS server. My slave DNS server runs in another Jail on another bare-metal machine. Everything is set up similarly to the master DNS server; however, that server is located in a different DC and different IP subnets. The only difference is the "named.conf". It's configured to be a slave, which means that the "dynamicdb" gets populated by BIND itself while doing zone transfers from the master.</p>
<pre>
paul uranus:~/git/blog/source [4279]% ssh admin@dns2.buetow.org tail -n 11 /usr/local/etc/namedb/named.conf
@@ -2834,7 +3074,7 @@ zone "buetow.zone" {
masters { 78.46.80.70; };
file "/usr/local/etc/namedb/dynamic/buetow.zone";
};
-</pre>
+</pre><br />
<h2>The result</h2>
<p>The result looks like this now:</p>
<pre>
@@ -2891,7 +3131,7 @@ dns2.buetow.org. 86400 IN AAAA 2a03:2500:1:6:20::
;; SERVER: 78.46.80.70#53(78.46.80.70)
;; WHEN: Sun May 22 11:34:41 BST 2016
;; MSG SIZE rcvd: 322
-</pre>
+</pre><br />
<h2>Monitoring</h2>
<p>For monitoring, I am using Icinga2 (I am operating two Icinga2 instances in two different DCs). I may have to post another blog article about Icinga2, but to get the idea, these were the snippets added to my Icinga2 configuration:</p>
<pre>
@@ -2915,7 +3155,7 @@ apply Service "dig6" {
assign where host.name == "dns.ian.buetow.org" || host.name == "caprica.ian.buetow.org"
}
-</pre>
+</pre><br />
<h2>DNS update workflow</h2>
<p>Whenever I have to change a DNS entry, all I have to do is:</p>
<ul>
@@ -2955,9 +3195,10 @@ apply Service "dig6" {
\____||__|_____|_| | __ | |
| || | | |
\____||__|_____|__|
-</pre>
+</pre><br />
<p class="quote"><i>Published by Paul at 2016-04-16</i></p>
-<a class="textlink" href="https://foo.zone/gemfeed/2016-04-03-offsite-backup-with-zfs.html">Read the first part before reading any furter here...</a><br />
+<a class="textlink" href="https://foo.zone/gemfeed/2016-04-03-offsite-backup-with-zfs.html">Offsite backup with ZFS Part 1</a><br />
+<a class="textlink" href="https://foo.zone/gemfeed/2016-04-16-offsite-backup-with-zfs-part2.html">Offsite backup with ZFS Part 2 (you are reading this atm.)</a><br />
<p>I enhanced the procedure a bit. From now on, I have two external 2TB USB hard drives. Both are set up precisely the same way. To decrease the probability that both drives fail simultaneously, they are of different brands. One drive is kept at a secret location. The other one is kept at home, right next to my HP MicroServer.</p>
<p>Whenever I update the offsite backup, I am doing it to the drive, which is kept locally. Afterwards, I bring it to the secret location, swap the drives, and bring the other back home. This ensures that I will always have an offsite backup available at a different location than my home - even while updating one copy of it.</p>
<p>Furthermore, I added scrubbing ("zpool scrub...") to the script. It ensures that the file system is consistent and that there are no bad blocks on the disk. To increase reliability, I also ran "zfs set copies=2 zroot". That setting is also synchronized to the offsite ZFS pool. ZFS now stores every data block to disk twice. Yes, it consumes twice as much disk space, but it makes the pool more fault-tolerant against hardware errors (e.g. only individual disk sectors going bad). </p>
@@ -2996,7 +3237,7 @@ apply Service "dig6" {
\ \
\ `. hjw
\ `.
-</pre>
+</pre><br />
<p class="quote"><i>Published by Paul at 2016-04-09</i></p>
<p>Over the last couple of years I wrote quite a few Puppet modules to manage my personal server infrastructure. One of them manages FreeBSD Jails and another one ZFS file systems. I thought I would give a brief overview of how it looks and feels.</p>
<h2>ZFS</h2>
@@ -3009,7 +3250,7 @@ zfs::create { 'ztank/foo':
require =&gt; File['/srv'],
}
-</pre>
+</pre><br />
<p>Puppet run:</p>
<pre>
admin alphacentauri:/opt/git/server/puppet/manifests [1212]% puppet.apply
@@ -3029,7 +3270,7 @@ ztank/foo 96K 1.13T 96K /srv/foo
admin alphacentauri:~ [1214]% df | grep foo
ztank/foo 1214493520 96 1214493424 0% /srv/foo
admin alphacentauri:~ [1215]%
-</pre>
+</pre><br />
<p>Destroying the file system just requires setting "ensure" to "absent" in Puppet:</p>
<pre>
zfs::create { 'ztank/foo':
@@ -3038,7 +3279,7 @@ zfs::create { 'ztank/foo':
require =&gt; File['/srv'],
-</pre>
+</pre><br />
<p>Puppet run:</p>
<pre>
admin alphacentauri:/opt/git/server/puppet/manifests [1220]% puppet.apply
@@ -3059,7 +3300,7 @@ zsh: exit 1 grep foo
admin alphacentauri:/opt/git/server/puppet/manifests [1222:1]% df | grep foo
zsh: done df |
zsh: exit 1 grep foo
-</pre>
+</pre><br />
<h2>Jails</h2>
<p>Here is an example of how a FreeBSD Jail can be created. The Jail will have its own public IPv6 address, and it will have its own internal IPv4 address with IPv4 NAT to the internet (this is due to the limitation that the host server only has one public IPv4 address, which must be shared between all the Jails).</p>
<p>Furthermore, Puppet will ensure that the Jail has its own ZFS file system (internally it uses the ZFS module). Please note that the NAT requires the packet filter to be set up correctly (not covered in this blog post).</p>
@@ -3099,7 +3340,7 @@ class { 'jail':
},
}
}
-</pre>
+</pre><br />
<p>This is what the result looks like:</p>
<pre>
admin sun:/etc [1939]% puppet.apply
@@ -3190,7 +3431,7 @@ lo0: flags=8049&lt;UP,LOOPBACK,RUNNING,MULTICAST&gt; metric 0 mtu 16384
options=600003&lt;RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6&gt;
inet 192.168.0.17 netmask 0xffffffff
nd6 options=29&lt;PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL&gt;
-</pre>
+</pre><br />
<h2>Inside-Jail Puppet</h2>
<p>To automatically set up the applications running in the Jail, I am using Puppet as well. I wrote a few scripts which bootstrap Puppet inside a newly created Jail. They do the following:</p>
<ul>
@@ -3328,7 +3569,7 @@ Notice: /Stage[main]/S_user::Root_files/S_user::My_files[root]/File[/root/.task]
.
.
Notice: Finished catalog run in 206.09 seconds
-</pre>
+</pre><br />
<h2>Managing multiple Jails</h2>
<p>Of course I am operating multiple Jails on the same host this way with Puppet:</p>
<ul>
@@ -3368,8 +3609,10 @@ Notice: Finished catalog run in 206.09 seconds
| | __ | |
| || | | |
\____||__|_____|__|
-</pre>
+</pre><br />
<p class="quote"><i>Published by Paul at 2016-04-03</i></p>
+<a class="textlink" href="https://foo.zone/gemfeed/2016-04-03-offsite-backup-with-zfs.html">Offsite backup with ZFS Part 1 (you are reading this atm.)</a><br />
+<a class="textlink" href="https://foo.zone/gemfeed/2016-04-16-offsite-backup-with-zfs-part2.html">Offsite backup with ZFS Part 2</a><br />
<h2>Please don't lose all my pictures again!</h2>
<p>When it comes to data storage and potential data loss, I am a paranoid person. That is due to my job and a personal experience from over ten years ago: a single drive failure and the loss of all my data (pictures, music, etc.).</p>
<p>A little about my personal infrastructure: I am running my own (mostly FreeBSD based) root servers (across several countries: two in Germany, one in Canada, one in Bulgaria) which store all my online data (e-mail and my Git repositories). I am syncing incremental (and encrypted) ZFS snapshots back and forth between these servers so that data can always be recovered from another server.</p>
@@ -3405,7 +3648,7 @@ Notice: Finished catalog run in 206.09 seconds
| |_| | __/ |_) | | | (_) | | (_| |
|____/ \___|_.__/|_| \___/|_|\__,_|
-</pre>
+</pre><br />
<p class="quote"><i>Published by Paul at 2015-12-05, last updated at 2021-05-16</i></p>
<p>You can use the following tutorial to install a full-blown Debian GNU/Linux Chroot on an LG G3 D855 CyanogenMod 13 (Android 6). First of all, you need to have root permissions on your phone, and you also need to have the developer mode activated. The following steps have been tested on Linux (Fedora 23).</p>
<a href="https://foo.zone/gemfeed/2015-12-05-run-debian-on-your-phone-with-debroid/Deboroid.png"><img src="https://foo.zone/gemfeed/2015-12-05-run-debian-on-your-phone-with-debroid/Deboroid.png" /></a><br />
@@ -3435,7 +3678,7 @@ sudo debootstrap --foreign --variant=minbase \
--arch armel jessie jessie/ \
http://http.debian.net/debian
sudo umount jessie
-</pre>
+</pre><br />
<h3>Copy Debian image to the phone</h3>
<p>Now set up the Debian image on an external SD card on the phone via the Android Debugger as follows:</p>
<pre>
@@ -3475,7 +3718,7 @@ busybox mount --bind /storage/sdcard1 \
# Check mounts
mount | grep jessie
-</pre>
+</pre><br />
<h3>Second debootstrap stage</h3>
<p>This is to be performed on the Android phone itself (inside a Debian chroot):</p>
<pre>
@@ -3484,7 +3727,7 @@ export PATH=/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin:/usr/local/sbin
/debootstrap/debootstrap --second-stage
exit # Leave chroot
exit # Leave adb shell
-</pre>
+</pre><br />
<h3>Setup of various scripts</h3>
<p>jessie.sh deals with all the loopback mount magic and so on. It will be run later every time you start Debroid on your phone.</p>
<pre>
@@ -3518,7 +3761,7 @@ apt-get update
apt-get upgrade
apt-get dist-upgrade
exit # Exit chroot
-</pre>
+</pre><br />
<h3>Entering Debroid and enabling a service</h3>
<p>This enters Debroid on your phone and starts the example service uptimed:</p>
<pre>
@@ -3535,7 +3778,7 @@ END
chmod 0755 /etc/rc.debroid
exit # Exit chroot
exit # Exit adb shell
-</pre>
+</pre><br />
<h3>Include in Android startup</h3>
<p>If you want to start Debroid automatically whenever your phone starts, then do the following:</p>
<pre>
@@ -3543,7 +3786,7 @@ adb push data/local/userinit.sh /data/local/userinit.sh
adb shell
chmod +x /data/local/userinit.sh
exit
-</pre>
+</pre><br />
<p>Reboot &amp; test! Enjoy!</p>
<p>E-Mail me your comments to paul at buetow dot org!</p>
</div>
@@ -3609,7 +3852,7 @@ BEGIN {
$i++;
}
}
-</pre>
+</pre><br />
<p>You can find the full source code at Codeberg:</p>
<a class="textlink" href="https://codeberg.org/snonux/perl-c-fibonacci">https://codeberg.org/snonux/perl-c-fibonacci</a><br />
<h3>Let's run it with C and C++</h3>
@@ -3649,7 +3892,7 @@ fib(7) = 13
fib(8) = 21
fib(9) = 34
fib(10) = 55
-</pre>
+</pre><br />
<h3>Let's run it with Perl and Raku</h3>
<pre>
% perl fibonacci.pl.raku.c
@@ -3685,7 +3928,7 @@ fib(7) = 13
fib(8) = 21
fib(9) = 34
fib(10) = 55
-</pre>
+</pre><br />
<p>It's entertaining to play with :-).</p>
<p>E-Mail me your comments to paul at buetow dot org!</p>
</div>
@@ -3709,7 +3952,7 @@ fib(10) = 55
\\_/ \ \\_/ \ \\_/ \.-,
\, /-( /'-,\, /-( /'-, \, /-( /
//\ //\\ //\ //\\ //\ //\\jrei
-</pre>
+</pre><br />
<p class="quote"><i>Published by Paul at 2011-05-07, last updated at 2021-05-07</i></p>
<p>PerlDaemon is a minimal daemon for Linux and other Unix-like operating systems, programmed in Perl. It is a small but pretty functional and fairly generic service framework. This means that it does not do anything useful other than providing a framework for starting, stopping, configuring and logging. To do something useful, a module (written in Perl) must be provided.</p>
<h2>Features</h2>
@@ -3734,7 +3977,7 @@ fib(10) = 55
# Alternatively: Starting in foreground
./bin/perldaemon start daemon.daemonize=no (or shortcut ./control foreground)
-</pre>
+</pre><br />
<p>To stop a daemon running in foreground mode, hit "Ctrl+C". To see more available startup options, run "./control" without any argument.</p>
<h2>How to configure</h2>
<p>The daemon instance can be configured in "./conf/perldaemon.conf". If you want to change a property only once, it is also possible to specify it on the command line (which will take precedence over the config file). All available config properties can be displayed via "./control keys":</p>
@@ -3763,7 +4006,7 @@ daemon.alivefile=./run/perldaemon.alive
# Specifies the working directory
daemon.wd=./
-</pre>
+</pre><br />
<h2>Example </h2>
<p>So let's start the daemon with a loop interval of 10 seconds:</p>
<pre>
@@ -3778,13 +4021,13 @@ Mon Jun 13 11:29:27 2011 (PID 2838): Triggering PerlDaemonModules::ExampleModule
Mon Jun 13 11:29:27 2011 (PID 2838): ExampleModule Test 2
$ ./control stop
Stopping daemon now...
-</pre>
+</pre><br />
<p>If you want to change that property permanently, either edit perldaemon.conf or do this:</p>
<pre>
$ ./control keys daemon.loopinterval=10 &gt; new.conf; mv new.conf conf/perldaemon.conf
-</pre>
+</pre><br />
<h2>HiRes event loop</h2>
-<p>PerlDaemon uses `Time::HiRes` to make sure that all the events run incorrect intervals. For each loop run, a time carry value is recorded and added to the next loop run to catch up on lost time.</p>
+<p>PerlDaemon uses <span class="inlinecode">Time::HiRes</span> to make sure that all the events run at correct intervals. For each loop run, a time carry value is recorded and added to the next loop run to catch up on lost time.</p>
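<p>The carry mechanism can be sketched roughly as follows. This is a simplified shell sketch, not PerlDaemon's actual Perl code; the interval and iteration count are made up, and <span class="inlinecode">date +%s%N</span> is GNU-specific:</p>

```shell
interval_ms=20   # loop interval (made-up value for this sketch)
carry_ms=0       # lost time to catch up on
runs=0

i=0
while [ "$i" -lt 5 ]; do
    start_ms=$(($(date +%s%N) / 1000000))
    runs=$((runs + 1))                       # the module work would go here
    elapsed_ms=$(($(date +%s%N) / 1000000 - start_ms))
    sleep_ms=$((interval_ms - elapsed_ms - carry_ms))
    if [ "$sleep_ms" -gt 0 ]; then
        # Still within the time slot: sleep away the remainder.
        sleep "$(awk "BEGIN { print $sleep_ms / 1000 }")"
        carry_ms=0
    else
        # This run overshot its slot: remember the overshoot so the
        # next sleep is shortened accordingly.
        carry_ms=$((0 - sleep_ms))
    fi
    i=$((i + 1))
done
echo "completed $runs runs"
```

<p>The point is that a slow run does not shift all subsequent runs; the loop sleeps less on the next iteration to catch up.</p>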
<h2>Writing your own modules</h2>
<h3>Example module</h3>
<p>This is one of the example modules you will find in the source code. It should be pretty self-explanatory if you know Perl :-).</p>
@@ -3818,7 +4061,7 @@ sub do ($) {
}
1;
-</pre>
+</pre><br />
<h3>Your own module</h3>
<p>Want to give it some better use? It's just as easy as:</p>
<pre>
@@ -3827,8 +4070,8 @@ sub do ($) {
vi YourModule.pm
cd -
./bin/perldaemon restart (or shortcut ./control restart)
-</pre>
-<p>Now watch `./log/perldaemon.log` closely. It is a good practice to test your modules in 'foreground mode' (see above how to do that).</p>
+</pre><br />
+<p>Now watch <span class="inlinecode">./log/perldaemon.log</span> closely. It is a good practice to test your modules in 'foreground mode' (see above how to do that).</p>
<p>BTW: You can install as many modules within the same instance as desired, but they run in sequential order (in the future, they may also run in parallel using several threads or processes).</p>
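<p>The sequential dispatch can be sketched like this. Note that this is a shell sketch with invented module names; PerlDaemon's real modules are Perl packages, not shell functions:</p>

```shell
# Each "module" is just a function here; the names are illustrative.
module_a() { echo "ModuleA did its work"; }
module_b() { echo "ModuleB did its work"; }

ran=0
# One loop iteration: every installed module runs, one after the
# other, in the order it was installed.
for module in module_a module_b; do
    "$module"
    ran=$((ran + 1))
done
echo "ran $ran modules"
```

<p>A parallel variant would start each module in its own process or thread instead of calling them in turn.</p>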
<h2>May the source be with you</h2>
<p>You can find PerlDaemon (including the examples) at:</p>
@@ -3857,7 +4100,7 @@ sub do ($) {
_ / /| _| |_| | |_) | __/ | |_| | __/ (_| | | | |_| _| |_| |
(_)_/ |_| \__, | .__/ \___| \__, |\___|\__,_|_| |_(_)_| \__, |
|___/|_| |___/ |___/
-</pre>
+</pre><br />
<p class="quote"><i>Published by Paul at 2010-05-09, last updated at 2021-05-05</i></p>
<p>Fype is an interpreted programming language created by me for learning and fun. The interpreter is written in C. It has been tested on FreeBSD and NetBSD and may also work on other Unix-like operating systems such as Linux-based ones. Besides learning and fun, there is no other reason why Fype exists, as many other programming languages are much faster and more powerful.</p>
<p>The Fype syntax is straightforward and uses a maximum look-ahead of 1 and a simple top-down parsing mechanism. Fype parses and interprets its code simultaneously. This means that syntax errors are only detected at program runtime.</p>
@@ -3872,7 +4115,7 @@ typedef struct {
Hash *p_hash_syms; // Symbol table
char *c_basename;
} Fype;
-</pre>
+</pre><br />
<p>And here is a snippet from the primary Fype "class implementation":</p>
<pre>
Fype*
@@ -3922,7 +4165,7 @@ fype_run(int i_argc, char **pc_argv) {
return (0);
}
-</pre>
+</pre><br />
<h2>Data types</h2>
<p>Fype uses auto type conversion. However, if you want to know what's going on, you may take a look at the following basic data types:</p>
<ul>
@@ -3950,7 +4193,7 @@ say bar;
my baz;
say baz; # Will print out 0
-</pre>
+</pre><br />
<p>You may use the "defined" keyword to check if an identifier has been defined or not:</p>
<pre>
ifnot defined foo {
@@ -3963,7 +4206,7 @@ if defined foo {
put "foo is defined and has the value ";
say foo;
}
-</pre>
+</pre><br />
<h3>Synonyms</h3>
<p>Each variable can have as many synonyms as you wish. A synonym is another name to access the content of a specific variable. Here is an example of how to use it:</p>
<pre>
@@ -3973,7 +4216,7 @@ foo = "bar";
# The synonym variable should now also be set to "bar"
assert "bar" == bar;
-</pre>
+</pre><br />
<p>Synonyms can be used for all kinds of identifiers. They are not limited to standard variables but can also be used for function and procedure names (more about functions and procedures later).</p>
<pre>
# Create a new procedure baz
@@ -3986,7 +4229,7 @@ undef baz;
# bay still has a reference of the original procedure baz
bay; # this prints out "I am baz"
-</pre>
+</pre><br />
<p>The "syms" keyword gives you the total number of synonyms pointing to a specific value:</p>
<pre>
my foo = 1;
@@ -3998,14 +4241,14 @@ say syms baz; # Prints 2
undef baz;
say syms foo; # Prints 1
-</pre>
+</pre><br />
<h2>Statements and expressions</h2>
<p>A Fype program is a list of statements. Each keyword, expression or function call is part of a statement. Each statement is ended with a semicolon. Example:</p>
<pre>
my bar = 3, foo = 1 + 2;
say foo;
exit foo - bar;
-</pre>
+</pre><br />
<h3>Parenthesis</h3>
<p>All parentheses for function arguments are optional. They help to make the code more readable. They also help to force the precedence of expressions.</p>
<h3>Basic expressions</h3>
@@ -4022,7 +4265,7 @@ exit foo - bar;
(integer) &lt;any&gt; &lt;&gt; &lt;any&gt;
(integer) &lt;any&gt; gt &lt;any&gt;
(integer) not &lt;any&gt;
-</pre>
+</pre><br />
<h3>Bitwise expressions</h3>
<pre>
(integer) &lt;any&gt; :&lt; &lt;any&gt;
@@ -4030,41 +4273,41 @@ exit foo - bar;
(integer) &lt;any&gt; and &lt;any&gt;
(integer) &lt;any&gt; or &lt;any&gt;
(integer) &lt;any&gt; xor &lt;any&gt;
-</pre>
+</pre><br />
<h3>Numeric expressions</h3>
<pre>
(number) neg &lt;number&gt;
-</pre>
+</pre><br />
<p>... returns the negative value of "number":</p>
<pre>
(integer) no &lt;integer&gt;
-</pre>
+</pre><br />
<p>... returns 1 if the argument is 0; otherwise, it will return 0! If no argument is given, then 0 is returned!</p>
<pre>
(integer) yes &lt;integer&gt;
-</pre>
+</pre><br />
<p>... always returns 1. The parameter is optional. Example:</p>
<pre>
# Prints out 1, because foo is not defined
if yes { say no defined foo; }
-</pre>
+</pre><br />
<h2>Control statements</h2>
<p>Control statements available in Fype:</p>
<pre>
if &lt;expression&gt; { &lt;statements&gt; }
-</pre>
+</pre><br />
<p>... runs the statements if the expression evaluates to a true value.</p>
<pre>
ifnot &lt;expression&gt; { &lt;statements&gt; }
-</pre>
+</pre><br />
<p>... runs the statements if the expression evaluates to a false value.</p>
<pre>
while &lt;expression&gt; { &lt;statements&gt; }
-</pre>
+</pre><br />
<p>... runs the statements as long as the expression evaluates to a true value.</p>
<pre>
until &lt;expression&gt; { &lt;statements&gt; }
-</pre>
+</pre><br />
<p>... runs the statements as long as the expression evaluates to a false value.</p>
<h2>Scopes</h2>
<p>A new scope starts with an { and ends with an }. An exception is a procedure, which does not use its own scope (see later in this manual). Control statements and functions support scopes. The "scope" function prints out all available symbols at the current scope. Here is a small example:</p>
@@ -4093,7 +4336,7 @@ my foo = 1;
# Prints out 0
say defined bar;
-</pre>
+</pre><br />
<p>Another example including an actual output:</p>
<pre>
./fype -e 'my global; func foo { my var4; func bar { my var2, var3; func baz { my var1; scope; } baz; } bar; } foo;'
@@ -4111,29 +4354,29 @@ SYM_FUNCTION: baz
2 level(s) up:
SYM_VARIABLE: var4 (id=00035, line=-0001, pos=-001, type=TT_INTEGER, dval=0.000000, refs=-1)
SYM_FUNCTION: bar
-</pre>
+</pre><br />
<h2>Definedness </h2>
<pre>
(integer) defined &lt;identifier&gt;
-</pre>
+</pre><br />
<p>... returns 1 if "identifier" has been defined. Returns 0 otherwise.</p>
<pre>
(integer) undef &lt;identifier&gt;
-</pre>
+</pre><br />
<p>... tries to undefine/delete the "identifier". Returns 1 if it succeeded, otherwise 0 is returned.</p>
<h2>System </h2>
<p>These are some system and interpreter specific built-in functions supported:</p>
<pre>
(void) end
-</pre>
+</pre><br />
<p>... exits the program with the exit status of 0.</p>
<pre>
(void) exit &lt;integer&gt;
-</pre>
+</pre><br />
<p>... exits the program with the specified exit status.</p>
<pre>
(integer) fork
-</pre>
+</pre><br />
<p>... forks a subprocess. It returns 0 in the child process and the PID of the child in the parent process! Example:</p>
<pre>
my pid = fork;
@@ -4145,24 +4388,24 @@ if pid {
} ifnot pid {
say "I am the child process";
}
-</pre>
+</pre><br />
<p>To execute the garbage collector do:</p>
<pre>
(integer) GC
-</pre>
+</pre><br />
<p>It returns the number of items freed! You may wonder why, most of the time, it will produce a value of 0: Fype tries to free unneeded memory as soon as possible. This may change in future versions to gain faster execution speed!</p>
<h3>I/O </h3>
<pre>
(any) put &lt;any&gt;
-</pre>
+</pre><br />
<p>... prints out the argument.</p>
<pre>
(any) say &lt;any&gt;
-</pre>
+</pre><br />
<p>... is the same as "put", but also prints a trailing newline.</p>
<pre>
(void) ln
-</pre>
+</pre><br />
<p>... just prints a new line.</p>
<h2>Procedures and functions</h2>
<h3>Procedures</h3>
@@ -4177,7 +4420,7 @@ my a = 2, b = 4;
foo; # Run the procedure. Print out "11\n"
say c; # Print out "6\n";
-</pre>
+</pre><br />
<h3>Nested procedures</h3>
<p>It's possible to define procedures inside of procedures. Since procedures don't have their own scope, nested procedures will be available to the current scope as soon as the main procedure has run the first time. You may use the "defined" keyword to check if a procedure has been defined or not.</p>
<pre>
@@ -4197,7 +4440,7 @@ proc foo {
foo; # Here the procedure foo will define the procedure bar!
bar; # Now the procedure bar is defined!
foo; # Here the procedure foo will redefine bar again!
-</pre>
+</pre><br />
<h3>Functions</h3>
<p>A function can be defined with the "func" keyword and deleted with the "undef" keyword. Functions do not yet return values and do not yet support parameter passing. A function uses local (lexically scoped) variables. If a certain variable does not exist in the local scope, an already defined variable from an enclosing scope (e.g. one scope above) is used.</p>
<pre>
@@ -4210,7 +4453,7 @@ my a = 2, b = 4;
foo; # Run the function. Print out "11\n"
say c; # Will produce an error because c is out of scope!
-</pre>
+</pre><br />
<h3>Nested functions</h3>
<p>Nested functions work the same way as nested procedures, except that nested functions are no longer available after the enclosing function has been left!</p>
<pre>
@@ -4224,14 +4467,14 @@ func foo {
foo;
bar; # Will produce an error because bar is out of scope!
-</pre>
+</pre><br />
<h2>Arrays</h2>
<p>Some progress on arrays has been made too. The following example creates a multidimensional array "foo". Its first element is the return value of the func "bar". The fourth value is the string "3" converted to a double number. The last element is an anonymous array which itself contains another anonymous array as its final element:</p>
<pre>
func bar { say "bar" }
my foo = [bar, 1, 4/2, double "3", ["A", ["BA", "BB"]]];
say foo;
-</pre>
+</pre><br />
<p>It produces the following output:</p>
<pre>
% ./fype arrays.fy
@@ -4242,7 +4485,7 @@ bar
A
BA
BB
-</pre>
+</pre><br />
<h2>Fancy stuff</h2>
<p>Fancy stuff like OOP or Unicode or threading is not planned. But fancy stuff like function pointers and closures may be considered. :-)</p>
<h2>May the source be with you</h2>
@@ -4276,7 +4519,7 @@ BB
( |---- | |
`---------------'--\\\\ .`--' -Glyde-
`||||
-</pre>
+</pre><br />
<p class="quote"><i>Published by Paul at 2010-05-07</i></p>
<p>In contrast to Haskell, Standard ML does not use lazy evaluation by default but eager evaluation.</p>
<a class="textlink" href="https://en.wikipedia.org/wiki/Eager_evaluation">https://en.wikipedia.org/wiki/Eager_evaluation</a><br />
@@ -4322,7 +4565,7 @@ fun nat_pairs_not_null () =
(* Test
val test = first 10 (nat_pairs_not_null ());
*)
-</pre>
+</pre><br />
<a class="textlink" href="http://smlnj.org/">http://smlnj.org/</a><br />
<h2>Real laziness with Haskell </h2>
<p>As Haskell already uses lazy evaluation by default, there is no need to construct a new data type: lists in Haskell are lazy out of the box. You will notice that the code is also much shorter and easier to understand than the SML version.</p>
@@ -4346,7 +4589,7 @@ nat_pairs_not_null = filters (\[x,y] -&gt; x &gt; 0 &amp;&amp; y &gt; 0) nat_pai
{- Test:
first 10 nat_pairs_not_null
-}
-</pre>
+</pre><br />
<a class="textlink" href="http://www.haskell.org/">http://www.haskell.org/</a><br />
<p>E-Mail me your comments to paul at buetow dot org!</p>
</div>
@@ -4376,7 +4619,7 @@ datatype ’a multi
= EMPTY
| ELEM of 'a
| UNION of 'a multi * 'a multi
-</pre>
+</pre><br />
<p>Haskell:</p>
<pre>
data (Eq a) =&gt; Multi a
@@ -4384,7 +4627,7 @@ data (Eq a) =&gt; Multi a
| Elem a
| Union (Multi a) (Multi a)
deriving Show
-</pre>
+</pre><br />
<h2>Processing a multi</h2>
<p>Standard ML:</p>
<pre>
@@ -4394,7 +4637,7 @@ fun number (EMPTY) _ = 0
fun test_number w = number (UNION (EMPTY, \
UNION (ELEM 4, UNION (ELEM 6, \
UNION (UNION (ELEM 4, ELEM 4), EMPTY))))) w
-</pre>
+</pre><br />
<p>Haskell:</p>
<pre>
number Empty _ = 0
@@ -4402,7 +4645,7 @@ number (Elem x) w = if x == w then 1 else 0
test_number w = number (Union Empty \
(Union (Elem 4) (Union (Elem 6) \
(Union (Union (Elem 4) (Elem 4)) Empty)))) w
-</pre>
+</pre><br />
<h2>Simplify function</h2>
<p>Standard ML:</p>
<pre>
@@ -4419,7 +4662,7 @@ fun simplify (UNION (x,y)) =
else UNION (x', y')
end
| simplify x = x
-</pre>
+</pre><br />
<p>Haskell:</p>
<pre>
simplify (Union x y)
@@ -4433,7 +4676,7 @@ simplify (Union x y)
x' = simplify x
y' = simplify y
simplify x = x
-</pre>
+</pre><br />
<h2>Delete all</h2>
<p>Standard ML:</p>
<pre>
@@ -4443,7 +4686,7 @@ fun delete_all m w =
| delete_all' x = x
in simplify (delete_all' m)
end
-</pre>
+</pre><br />
<p>Haskell:</p>
<pre>
delete_all m w = simplify (delete_all' m)
@@ -4451,7 +4694,7 @@ delete_all m w = simplify (delete_all’ m)
delete_all' (Elem x) = if x == w then Empty else Elem x
delete_all' (Union x y) = Union (delete_all' x) (delete_all' y)
delete_all' x = x
-</pre>
+</pre><br />
<h2>Delete one</h2>
<p>Standard ML:</p>
<pre>
@@ -4470,7 +4713,7 @@ fun delete_one m w =
val (m', _) = delete_one' m
in simplify m'
end
-</pre>
+</pre><br />
<p>Haskell:</p>
<pre>
delete_one m w = do
@@ -4486,7 +4729,7 @@ delete_one m w = do
delete_one' (Elem x) =
if x == w then (Empty, True) else (Elem x, False)
delete_one' x = (x, False)
-</pre>
+</pre><br />
<h2>Higher-order functions</h2>
<p>The first line is always the SML code, the second line the Haskell variant:</p>
<pre>
@@ -4501,7 +4744,7 @@ my_map f l = foldr (make_map_fn f) [] l
fun my_filter f l = foldr (make_filter_fn f) [] l
my_filter f l = foldr (make_filter_fn f) [] l
-</pre>
+</pre><br />
<p>E-Mail me your comments to paul at buetow dot org!</p>
</div>
</content>
@@ -4534,7 +4777,7 @@ my_filter f l = foldr (make_filter_fn f) [] l
\|/ \\|| \\| |//
_jgs_\|//_\\|///_\V/_\|//__
Art by Joan Stark
-</pre>
+</pre><br />
<p class="quote"><i>Published by Paul at 2008-12-29, last updated at 2021-12-01</i></p>
<p>Last week I was in Vidin, Bulgaria, with no internet access, and I had to fix my MTA (Postfix) at host.0.buetow.org, which serves E-Mail for all my customers at P. B. Labs. It is good that I do not guarantee high availability for my web services (I also have a full-time job somewhere else).</p>
<p>My first attempt to find an internet café that was open during Christmastime failed. However, with my N95 phone I found lots of free WLAN hotspots. The hotspots would not let me log into my server via SSH, as I had configured a non-standard SSH port for security reasons. Without knowing the costs, I used the GPRS internet access of my German phone provider (yes, I had to pay roaming fees).</p>
@@ -4586,7 +4829,7 @@ _~~|~/_|_|__/|~~~~~~~ | / ~~~~~ | | ~~~~~~~~
|| \\ _/ / | |
~ ~ ~~~ _|| (_/ (___)_| |Nov291999
(__) (____)
-</pre>
+</pre><br />
<p class="quote"><i>Published by Paul at 2008-06-26, last updated at 2021-05-04</i></p>
<p>Here are some Perl Poems I wrote. They don't do anything useful when you run them, but they don't produce a compiler error either. They only exist for fun and demonstrate what you can do with Perl syntax.</p>
<p>Wikipedia: "Perl poetry is the practice of writing poems that can be compiled as legal Perl code, for example the piece known as Black Perl. Perl poetry is made possible by the large number of English words that are used in the Perl language. New poems are regularly submitted to the community at PerlMonks."</p>
@@ -4632,7 +4875,7 @@ do { exp'onentize' and abs'olutize' };
home: //ig,'nore', time and sleep $very =~ s/tr/on/g;
__END__
-</pre>
+</pre><br />
<h2>christmas.pl</h2>
<pre>
#!/usr/bin/perl
@@ -4676,7 +4919,7 @@ END {} our $mission and do sleep until next Christmas ;}
__END__
This is perl, v5.8.8 built for i386-freebsd-64int
-</pre>
+</pre><br />
<h2>shopping.pl</h2>
<pre>
#!/usr/bin/perl
@@ -4708,7 +4951,7 @@ and sleep until unpack$ing, cool products();
__END__
This is perl, v5.8.8 built for i386-freebsd-64int
-</pre>
+</pre><br />
<h2>More...</h2>
<p>Did you like what you saw? Have a look at Codeberg to see my other poems too:</p>
<a class="textlink" href="https://codeberg.org/snonux/perl-poetry">https://codeberg.org/snonux/perl-poetry</a><br />