From 3492c68d239854c0404fe77a00460eeedd41f5cc Mon Sep 17 00:00:00 2001
From: Paul Buetow
Date: Sat, 1 Apr 2023 15:55:00 +0300
Subject: Update content for html

---
 gemfeed/atom.xml | 1338 +++++++++++++++---------------------------------
 1 file changed, 369 insertions(+), 969 deletions(-)

(limited to 'gemfeed/atom.xml')

diff --git a/gemfeed/atom.xml b/gemfeed/atom.xml
index 56b18dbc..f4fa6d89 100644
--- a/gemfeed/atom.xml
+++ b/gemfeed/atom.xml
@@ -1,6 +1,6 @@
- 2023-03-31T00:51:44+03:00
+ 2023-04-01T15:50:57+03:00
 foo.zone feed
 To be in the .zone!
@@ -19,8 +19,7 @@

Gemtexter 2.0.0 - Let's Gemtext again^2

-Published at 2023-03-25T17:50:32+02:00
-
+

Published at 2023-03-25T17:50:32+02:00

 -=[ typewriters ]=-  1/98
 
@@ -33,22 +32,14 @@
  jgs  `"""""""""`      |o=======.|
   mod. by Paul Buetow  `"""""""""`
 
-
-I proudly announce that I've released Gemtexter version 2.0.0. What is Gemtexter? It's my minimalist static site generator for Gemini Gemtext, HTML and Markdown written in GNU Bash.
-
+

I proudly announce that I've released Gemtexter version 2.0.0. What is Gemtexter? It's my minimalist static site generator for Gemini Gemtext, HTML and Markdown written in GNU Bash.

https://codeberg.org/snonux/gemtexter
-
-This is a new major release, so it contains a breaking change (see "Meta cache made obsolete").
-
-Let's list what's new!
-
+

This is a new major release, so it contains a breaking change (see "Meta cache made obsolete").

+

Let's list what's new!

Minimal template engine

-Gemtexter now supports templating, enabling dynamically generated content to .gmi files before converting anything to any output format like HTML and Markdown.
-
-A template file name must have the suffix gmi.tpl. A template must be put into the same directory as the Gemtext .gmi file to be generated. Gemtexter will generate a Gemtext file index.gmi from a given template index.gmi.tpl. A <<< and >>> encloses a multiline template. All lines starting with << will be evaluated as a single line of Bash code and the output will be written into the resulting Gemtext file.
-
-For example, the template index.gmi.tpl:
-
+

Gemtexter now supports templating, which enables dynamically generated content in .gmi files before anything is converted to other output formats such as HTML and Markdown.

+

A template file name must have the suffix gmi.tpl. A template must be put into the same directory as the Gemtext .gmi file to be generated. Gemtexter will generate a Gemtext file index.gmi from a given template index.gmi.tpl. <<< and >>> enclose a multiline template. All lines starting with << will be evaluated as a single line of Bash code, and the output will be written into the resulting Gemtext file.

+

For example, the template index.gmi.tpl:

 # Hello world
 
@@ -62,9 +53,7 @@ Welcome to this capsule!
   done
 >>>
 
-
-... results into the following index.gmi after running ./gemtexter --generate (or ./gemtexter --template, which instructs to do only template processing and nothing else):
-
+

... results in the following index.gmi after running ./gemtexter --generate (or ./gemtexter --template, which instructs Gemtexter to do only the template processing and nothing else):

 # Hello world
 
@@ -83,9 +72,7 @@ Multiline template line 8
 Multiline template line 9
 Multiline template line 10
 
-
-Another thing you can do is insert an index with links to similar blog posts. E.g.:
-
+

Another thing you can do is insert an index with links to similar blog posts. E.g.:

 See more entries about DTail and Golang:
 
@@ -93,9 +80,7 @@ See more entries about DTail and Golang:
 
 Blablabla...
 
-
-... scans all other post entries with dtail and golang in the file name and generates a link list like this:
-
+

... scans all other post entries with dtail and golang in the file name and generates a link list like this:

 See more entries about DTail and Golang:
 
@@ -106,34 +91,25 @@ See more entries about DTail and Golang:
 
 Blablabla...
 
-

Added hooks

-You can configure PRE_GENERATE_HOOK and POST_PUBLISH_HOOK to point to scripts to be executed before running --generate, or after running --publish. E.g. you could populate some of the content by an external script before letting Gemtexter do its thing or you could automatically deploy the site after running --publish.
-
-The sample config file gemtexter.conf includes this as an example now; these scripts will only be executed when they actually exist:
-
+

You can configure PRE_GENERATE_HOOK and POST_PUBLISH_HOOK to point to scripts to be executed before running --generate or after running --publish. E.g., you could populate some of the content with an external script before letting Gemtexter do its thing, or you could automatically deploy the site after running --publish.

+

The sample config file gemtexter.conf includes this as an example now; these scripts will only be executed when they actually exist:

 declare -xr PRE_GENERATE_HOOK=./pre_generate_hook.sh
 declare -xr POST_PUBLISH_HOOK=./post_publish_hook.sh
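A hook can be any executable script. A minimal pre-generate hook might look like this (a hypothetical example; the fragment it writes and its path are made up for illustration and are not part of Gemtexter):

```shell
#!/usr/bin/env bash
# Hypothetical pre_generate_hook.sh: refresh a small, dynamically
# generated Gemtext fragment before Gemtexter runs --generate.
set -euf -o pipefail

# The path content/last-updated.gmi is only an example.
mkdir -p content
printf '> Last updated at %s\n' "$(date --iso-8601=seconds)" > content/last-updated.gmi
```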
 
-

Use of safer Bash options

-Gemtexter now does set -euf -o pipefile, which helps to eliminate bugs and to catch scripting errors sooner. Previous versions only set -e.
-
+

Gemtexter now does set -euf -o pipefail, which helps to eliminate bugs and to catch scripting errors sooner. Previous versions only set -e.
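To see what these options buy you, here is a standalone sketch (not code from Gemtexter itself):

```shell
#!/usr/bin/env bash
# The options Gemtexter 2.0.0 now sets (previous versions only set -e):
set -euf -o pipefail
#   -e           exit as soon as a command fails
#   -u           using an unset variable is an error
#   -f           disable filename globbing
#   -o pipefail  a pipeline fails if ANY stage fails, not only the last one

# Without pipefail, `false | true` would count as success (exit 0);
# with pipefail, the failing first stage makes the whole pipeline fail.
if false | true; then
    echo 'pipeline reported success'
else
    echo 'pipeline failure caught thanks to pipefail'
fi
```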

Meta cache made obsolete

-Here is the breaking change to older versions of Gemtexter. The $BASE_CONTENT_DIR/meta directory was made obsolete. meta was used to store various information about all the blog post entries to make generating an Atom feed in Bash easier. Especially the publishing dates of each post were stored there. Instead, the publishing date is now encoded in the .gmi file. And if it is missing, Gemtexter will set it to the current date and time at first run.
-
-An example blog post without any publishing date looks like this:
-
+

Here is the breaking change compared to older versions of Gemtexter: the $BASE_CONTENT_DIR/meta directory was made obsolete. meta was used to store various information about all the blog post entries to make generating an Atom feed in Bash easier; in particular, the publishing date of each post was stored there. Instead, the publishing date is now encoded in the .gmi file itself. If it is missing, Gemtexter will set it to the current date and time at first run.

+

An example blog post without any publishing date looks like this:

 % cat gemfeed/2023-02-26-title-here.gmi
 # Title here
 
 The remaining content of the Gemtext file...
 
-
-Gemtexter will add a line starting with > Published at ... now. Any subsequent Atom feed generation will then use that date.
-
+

Gemtexter will now add a line starting with > Published at .... Any subsequent Atom feed generation will then use that date.

 % cat gemfeed/2023-02-26-title-here.gmi
 # Title here
@@ -142,22 +118,16 @@ Gemtexter will add a line starting with  > Published
 
 The remaining content of the Gemtext file...
 
-

XMLLint support

-Optionally, when the xmllint binary is installed, Gemtexter will perform a simple XML lint check against the Atom feed generated. This is a double-check of whether the Atom feed is a valid XML.
-
+

Optionally, when the xmllint binary is installed, Gemtexter will perform a simple lint check against the generated Atom feed. This is a double-check that the Atom feed is valid XML.
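Such a check boils down to something like this (a sketch; the feed path below matches this capsule's layout and may differ for yours):

```shell
#!/usr/bin/env bash
# Run a simple lint check against a generated Atom feed, but only
# when xmllint is actually installed (it is an optional dependency).
feed=gemfeed/atom.xml

if command -v xmllint >/dev/null 2>&1; then
    xmllint --noout "$feed" && echo "$feed is well-formed XML"
else
    echo 'xmllint not installed, skipping the XML check' >&2
fi
```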

More

-Additionally, there were a couple of bug fixes, refactorings and overall improvements in the documentation made.
-
-Other related posts are:
-
+

Additionally, there were a couple of bug fixes, refactorings and overall improvements to the documentation.

+

Other related posts are:

2023-03-25 Gemtexter 2.0.0 - Let's Gemtext again^2 (You are currently reading this)
2022-08-27 Gemtexter 1.1.0 - Let's Gemtext again
2021-06-05 Gemtexter - One Bash script to rule it all
2021-04-24 Welcome to the Geminispace
-
-E-Mail your comments to hi@paul.cyou :-)
-
+

E-Mail your comments to hi@paul.cyou :-)

Back to the main site
@@ -648,8 +618,7 @@ nmap ,j :call OpenJournalPage()<CR>

Installing DTail on OpenBSD

-Published at 2022-10-30T11:03:19+02:00
-
+

Published at 2022-10-30T11:03:19+02:00

        ,_---~~~~~----._
  _,,_,*^____      _____``*g*\"*,
@@ -686,28 +655,18 @@ nmap ,j :call OpenJournalPage()<CR>
                             /   `._____V_____V'
                                        '     '
 
-
-This will be a quick blog post, as I am busy with my personal life now. I have relocated to a different country and am still busy arranging things. So bear with me :-)
-
- In this post, I want to give a quick overview (or how-to) about installing DTail on OpenBSD, as the official documentation only covers Red Hat and Fedora Linux! And this blog post will also be used as my reference!
-
+

This will be a quick blog post, as I am busy with my personal life now. I have relocated to a different country and am still busy arranging things. So bear with me :-)

+

In this post, I want to give a quick overview (or how-to) about installing DTail on OpenBSD, as the official documentation only covers Red Hat and Fedora Linux! And this blog post will also be used as my reference!

https://dtail.dev
-
-I am using Rexify for my OpenBSD automation. Check out the following article covering my Rex setup in a little bit more detail:
-
+

I am using Rexify for my OpenBSD automation. Check out the following article covering my Rex setup in a little bit more detail:

Let's Encrypt with OpenBSD and Rex
-
-I will also mention some relevant Rexfile snippets in this post!
-
+

I will also mention some relevant Rexfile snippets in this post!

Compile it

-First of all, DTail needs to be downloaded and compiled. For that, git, go, and gmake are required:
-
+

First of all, DTail needs to be downloaded and compiled. For that, git, go, and gmake are required:

 $ doas pkg_add git go gmake
 
-
-I am happy that the Go Programming Language is readily available in the OpenBSD packaging system. Once the dependencies got installed, clone DTail and compile it:
-
+

I am happy that the Go Programming Language is readily available in the OpenBSD packaging system. Once the dependencies are installed, clone DTail and compile it:

 $ mkdir git
 $ cd git
@@ -715,43 +674,32 @@ $ git clone https://github.com/mimecast/dtail
 $ cd dtail
 $ gmake 
 
-
-You can verify the version by running the following command:
-
+

You can verify the version by running the following command:

 $ ./dtail --version
  DTail  4.1.0  Protocol 4.1  Have a lot of fun!
 $ file dtail
  dtail: ELF 64-bit LSB executable, x86-64, version 1
 
-
-Now, there isn't any need anymore to keep git, go and gmake, so they can be deinstalled now:
-
+

Now there isn't any need to keep git, go and gmake anymore, so they can be deinstalled:

 $ doas pkg_delete git go gmake
 
-
-One day I shall create an official OpenBSD port for DTail.
-
+

One day I shall create an official OpenBSD port for DTail.

Install it

-Installing the binaries is now just a matter of copying them to /usr/local/bin as follows:
-
+

Installing the binaries is now just a matter of copying them to /usr/local/bin as follows:

 $ for bin in dserver dcat dgrep dmap dtail dtailhealth; do
   doas cp -p $bin /usr/local/bin/$bin
   doas chown root:wheel /usr/local/bin/$bin
 done
 
-
-Also, we will be creating the _dserver service user:
-
+

Also, we will be creating the _dserver service user:

 $ doas adduser -class nologin -group _dserver -batch _dserver
 $ doas usermod -d /var/run/dserver/ _dserver
 
-
-The OpenBSD init script is created from scratch (not part of the official DTail project). Run the following to install the bespoke script:
-
+

The OpenBSD init script is created from scratch (not part of the official DTail project). Run the following to install the bespoke script:

 $ cat <<'END' | doas tee /etc/rc.d/dserver
 #!/bin/ksh
@@ -773,10 +721,8 @@ rc_cmd $1 &
 END
 $ doas chmod 755 /etc/rc.d/dserver
 
-

Rexification

-This is the task for setting it up via Rex. Note the . . . ., that's a placeholder which we will fill up more and more during this blog post:
-
+

This is the task for setting it up via Rex. Note the . . . . placeholder, which we will fill up more and more during this blog post:

 desc 'Setup DTail';
 task 'dtail', group => 'frontends',
@@ -799,18 +745,14 @@ task 'dtail', group => 'frontends',
       service 'dserver', ensure => 'started';
    };
 
-

Configure it

-Now, DTail is fully installed but still needs to be configured. Grab the default config file from GitHub ...
-
+

Now, DTail is fully installed but still needs to be configured. Grab the default config file from GitHub ...

 $ doas mkdir /etc/dserver
 $ curl https://raw.githubusercontent.com/mimecast/dtail/master/samples/dtail.json.sample |
     doas tee /etc/dserver/dtail.json
 
-
-... and then edit it and adjust LogDir in the Common section to /var/log/dserver. The result will look like this:
-
+

... and then edit it and adjust LogDir in the Common section to /var/log/dserver. The result will look like this:

   "Common": {
     "LogDir": "/var/log/dserver",
@@ -821,10 +763,8 @@ $ curl https://raw.githubusercontent.com/mimecast/dtail/master/samples/dtail.jso
     "LogLevel": "Info"
   }
 
-

Rexification

-That's as simple as adding the following to the Rex task:
-
+

That's as simple as adding the following to the Rex task:

 file '/etc/dserver',
   ensure => 'directory';
@@ -836,12 +776,9 @@ file '/etc/dserver/dtail.json',
   mode => '755',
   on_change => sub { $restart = TRUE };
 
-

Update the key cache for it

-DTail relies on SSH for secure authentication and communication. However, the system user _dserver has no permission to read the SSH public keys from the user's home directories, so the DTail server also checks for available public keys in an alternative path /var/run/dserver/cache.
-
-The following script, populating the DTail server key cache, can be run periodically via CRON:
-
+

DTail relies on SSH for secure authentication and communication. However, the system user _dserver has no permission to read the SSH public keys from the user's home directories, so the DTail server also checks for available public keys in an alternative path /var/run/dserver/cache.

+

The following script, populating the DTail server key cache, can be run periodically via CRON:

 $ cat <<'END' | doas tee /usr/local/bin/dserver-update-key-cache.sh
 #!/bin/ksh
@@ -881,17 +818,13 @@ echo 'All set...'
 END
 $ doas chmod 500 /usr/local/bin/dserver-update-key-cache.sh
 
-
-Note that the script above is a slight variation of the official DTail script. The official DTail one is a bash script, but on OpenBSD, there's ksh. I run it once daily by adding it to the daily.local:
-
+

Note that the script above is a slight variation of the official DTail script. The official DTail one is a bash script, but on OpenBSD, there's ksh. I run it once daily by adding it to the daily.local:

 $ echo /usr/local/bin/dserver-update-key-cache.sh | doas tee -a /etc/daily.local
 /usr/local/bin/dserver-update-key-cache.sh
 
-

Rexification

-That's done by adding ...
-
+

That's done by adding ...

 file '/usr/local/bin/dserver-update-key-cache.sh',
   content => template('./scripts/dserver-update-key-cache.sh.tpl'),
@@ -901,12 +834,9 @@ file '/usr/local/bin/dserver-update-key-cache.sh',
 
 append_if_no_such_line '/etc/daily.local', '/usr/local/bin/dserver-update-key-cache.sh';
 
-
-... to the Rex task!
-
+

... to the Rex task!

Start it

-Now, it's time to enable and start the DTail server:
-
+

Now, it's time to enable and start the DTail server:

 $ sudo rcctl enable dserver
 $ sudo rcctl start dserver
@@ -928,9 +858,7 @@ INFO|1022-090739|86050|stats.go:53|2|11|7|||MAPREDUCE:STATS|currentConnections=0
 .
 Ctr+C
 
-
-As we don't want to wait until tomorrow, let's populate the key cache manually:
-
+

As we don't want to wait until tomorrow, let's populate the key cache manually:

 $ doas /usr/local/bin/dserver-update-key-cache.sh
 Updating SSH key cache
@@ -942,12 +870,9 @@ Caching /home/paul/.ssh/authorized_keys -> /var/cache/dserver/paul.authorized
 Caching /home/rex/.ssh/authorized_keys -> /var/cache/dserver/rex.authorized_keys
 All set...
 
-

Use it

-The DTail server is now ready to serve connections. You can use any DTail commands, such as dtail, dgrep, dmap, dcat, dtailhealth, to do so. Checkout out all the usage examples on the official DTail page.
-
-I have installed DTail server this way on my personal OpenBSD frontends blowfish, and fishfinger, and the following command connects as user rex to both machines and greps the file /etc/fstab for the string local:
-
+

The DTail server is now ready to serve connections. You can use any of the DTail commands, such as dtail, dgrep, dmap, dcat and dtailhealth, to do so. Check out all the usage examples on the official DTail page.

+

I have installed the DTail server this way on my personal OpenBSD frontends blowfish and fishfinger, and the following command connects as user rex to both machines and greps the file /etc/fstab for the string local:

 ❯ ./dgrep -user rex -servers blowfish.buetow.org,fishfinger.buetow.org --regex local /etc/fstab
 CLIENT|earth|WARN|Encountered unknown host|{blowfish.buetow.org:2222 0xc0000a00f0 0xc0000a61e0 [blowfish.buetow.org]:2222 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9ZnF/LAk14SgqCzk38yENVTNfqibcluMTuKx1u53cKSp2xwHWzy0Ni5smFPpJDIQQljQEJl14ZdXvhhjp1kKHxJ79ubqRtIXBlC0PhlnP8Kd+mVLLHYpH9VO4rnaSfHE1kBjWkI7U6lLc6ks4flgAgGTS5Bb7pLAjwdWg794GWcnRh6kSUEQd3SftANqQLgCunDcP2Vc4KR9R78zBmEzXH/OPzl/ANgNA6wWO2OoKKy2VrjwVAab6FW15h3Lr6rYIw3KztpG+UMmEj5ReexIjXi/jUptdnUFWspvAmzIl6kwzzF8ExVyT9D75JRuHvmxXKKjyJRxqb8UnSh2JD4JN [23.88.35.144]:2222 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9ZnF/LAk14SgqCzk38yENVTNfqibcluMTuKx1u53cKSp2xwHWzy0Ni5smFPpJDIQQljQEJl14ZdXvhhjp1kKHxJ79ubqRtIXBlC0PhlnP8Kd+mVLLHYpH9VO4rnaSfHE1kBjWkI7U6lLc6ks4flgAgGTS5Bb7pLAjwdWg794GWcnRh6kSUEQd3SftANqQLgCunDcP2Vc4KR9R78zBmEzXH/OPzl/ANgNA6wWO2OoKKy2VrjwVAab6FW15h3Lr6rYIw3KztpG+UMmEj5ReexIjXi/jUptdnUFWspvAmzIl6kwzzF8ExVyT9D75JRuHvmxXKKjyJRxqb8UnSh2JD4JN 0xc0000a2180}
@@ -959,32 +884,23 @@ CLIENT|earth|INFO|Added hosts to known hosts file|/home/paul/.ssh/known_hosts
 REMOTE|blowfish|100|7|fstab|31bfd9d9a6788844.h /usr/local ffs rw,wxallowed,nodev 1 2
 REMOTE|fishfinger|100|7|fstab|093f510ec5c0f512.h /usr/local ffs rw,wxallowed,nodev 1 2
 
-
-Running it the second time, and given that you trusted the keys the first time, it won't prompt you for the host keys anymore:
-
+

Running it a second time, and given that you trusted the keys the first time, it won't prompt you for the host keys anymore:

 ❯ ./dgrep -user rex -servers blowfish.buetow.org,fishfinger.buetow.org --regex local /etc/fstab
 REMOTE|blowfish|100|7|fstab|31bfd9d9a6788844.h /usr/local ffs rw,wxallowed,nodev 1 2
 REMOTE|fishfinger|100|7|fstab|093f510ec5c0f512.h /usr/local ffs rw,wxallowed,nodev 1 2
 
-

Conclusions

-It's a bit of manual work, but it's ok on this small scale! I shall invest time in creating an official OpenBSD port, though. That would render most of the manual steps obsolete, as outlined in this post!
-
-Check out the following for more information:
-
+

It's a bit of manual work, but it's ok on this small scale! I shall invest time in creating an official OpenBSD port, though. That would render most of the manual steps obsolete, as outlined in this post!

+

Check out the following for more information:

https://dtail.dev
https://github.com/mimecast/dtail
https://www.rexify.org
-
-Other related posts are:
-
+

Other related posts are:

2022-10-30 Installing DTail on OpenBSD (You are currently reading this)
2022-03-06 The release of DTail 4.0.0
2021-04-22 DTail - The distributed log tail program
-
-E-Mail your comments to hi@paul.cyou :-)
-
+

E-Mail your comments to hi@paul.cyou :-)

Back to the main site
@@ -1077,8 +993,7 @@ jgs (________\ \

Gemtexter 1.1.0 - Let's Gemtext again

-Published at 2022-08-27T18:25:57+01:00
-
+

Published at 2022-08-27T18:25:57+01:00

 -=[ typewriter ]=-  1/98
 
@@ -1090,17 +1005,12 @@ jgs                (________\  \
       |o=======.|
  jgs  `"""""""""`
 
-
-I proudly announce that I've released Gemtexter version 1.1.0. What is Gemtexter? It's my minimalist static site generator for Gemini Gemtext, HTML and Markdown written in GNU Bash.
-
+

I proudly announce that I've released Gemtexter version 1.1.0. What is Gemtexter? It's my minimalist static site generator for Gemini Gemtext, HTML and Markdown written in GNU Bash.

https://codeberg.org/snonux/gemtexter
-
-It has been around a year since I released the first version 1.0.0. Although, there aren't any groundbreaking changes, there have been a couple of smaller commits and adjustments. I was quite surprised that I received a bunch of feedback and requests about Gemtexter so it means that I am not the only person in the universe actually using it.
-
+

It has been around a year since I released the first version, 1.0.0. Although there aren't any groundbreaking changes, there have been a couple of smaller commits and adjustments. I was quite surprised that I received a bunch of feedback and requests about Gemtexter, so it seems that I am not the only person in the universe actually using it.

What's new?

Automatic check for GNU version requirements

-Gemtexter relies on the GNU versions of the tools grep, sed and date and it also requires the Bash shell in version 5 at least. That's now done in the check_dependencies() function:
-
+

Gemtexter relies on the GNU versions of the tools grep, sed and date, and it also requires at least version 5 of the Bash shell. That check is now done in the check_dependencies() function:

 check_dependencies () {
     # At least, Bash 5 is required
@@ -1120,48 +1030,33 @@ check_dependencies () {
     done
 }
 
-
-Especially macOS users didn't read the README carefully enough to install GNU Grep, GNU Sed and GNU Date before using Gemtexter.
-
+

macOS users in particular didn't read the README carefully enough to install GNU Grep, GNU Sed and GNU Date before using Gemtexter.

Backticks now produce inline code blocks in the HTML output

-The Gemtext format doesn't support inline code blocks, but Gemtexter now produces inline code blocks (means, small code fragments can be placed in the middle of a paragraph) in the HTML output when the code block is enclosed with Backticks. There were no adjustments required for the Markdown output format, because Markdown supports it already out of the box.
-
+

The Gemtext format doesn't support inline code blocks, but Gemtexter now produces inline code blocks in the HTML output (meaning small code fragments can be placed in the middle of a paragraph) when the fragment is enclosed in backticks. No adjustments were required for the Markdown output format, because Markdown already supports this out of the box.
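The transformation boils down to a substitution like the following (an illustrative sketch, not Gemtexter's actual sed expression):

```shell
# Turn `inline code` spans into HTML <code> elements.
# (Illustrative only; Gemtexter's real substitution differs.)
echo 'Run `./gemtexter --generate` to build the capsule.' |
    sed -E 's|`([^`]+)`|<code>\1</code>|g'
# -> Run <code>./gemtexter --generate</code> to build the capsule.
```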

Cache for Atom feed generation

-The Bash is not the most performant language. Gemtexter already takes a couple of seconds only to generate the Atom feed for around two hand full of articles on my slightly underpowered Surface Go 2 Linux tablet. Therefore, I introduced a cache, so that subsequent Atom feed generation runs finish much quicker. The cache uses a checksum of the Gemtext .gmi file to decide whether anything of the content has changed or not.
-
+

Bash is not the most performant language. Gemtexter already takes a couple of seconds just to generate the Atom feed for around two handfuls of articles on my slightly underpowered Surface Go 2 Linux tablet. Therefore, I introduced a cache so that subsequent Atom feed generation runs finish much quicker. The cache uses a checksum of the Gemtext .gmi file to decide whether any of the content has changed.
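The idea behind the cache can be sketched like this (a simplified illustration of the approach only; the file locations and cache layout are made up and this is not Gemtexter's actual code):

```shell
#!/usr/bin/env bash
# Simplified sketch: regenerate a feed entry only when the checksum
# of the source .gmi file changed since the last run.
set -euf -o pipefail

cache_dir=$(mktemp -d)

needs_regeneration () {
    local gmi_file=$1
    local cache_file="$cache_dir/$(basename "$gmi_file").sha256"
    local current_sum
    current_sum=$(sha256sum "$gmi_file" | cut -d' ' -f1)
    if [[ -f $cache_file && $(< "$cache_file") == "$current_sum" ]]; then
        return 1 # Content unchanged, the cached feed entry can be reused.
    fi
    echo "$current_sum" > "$cache_file"
    return 0 # New or changed content, the feed entry must be regenerated.
}

post=$(mktemp --suffix=.gmi)
echo '# Hello world' > "$post"

needs_regeneration "$post" && echo 'first run: regenerating feed entry'
needs_regeneration "$post" || echo 'second run: cache hit, skipping'
```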

Input filter support

-Once your capsule reaches a certain size, it can become annoying to re-generate everything if you only want to preview the HTML or Markdown output of one single content file. The following will add a filter to only generate the files matching a regular expression:
-
+

Once your capsule reaches a certain size, it can become annoying to re-generate everything if you only want to preview the HTML or Markdown output of one single content file. The following will add a filter to only generate the files matching a regular expression:

 ./gemtexter --generate '.*hello.*'
 
-

Revamped git support

-The Git support has been completely rewritten. It's now more reliable and faster too. Have a look at the README for more information.
-
+

The Git support has been completely rewritten. It's now more reliable and faster too. Have a look at the README for more information.

Addition of htmlextras and web font support

-The htmlextras folder now contains all extra files required for the HTML output format such as cascading style sheet (CSS) files and web fonts.
-
+

The htmlextras folder now contains all extra files required for the HTML output format such as cascading style sheet (CSS) files and web fonts.

Sub-section support

-It's now possible to define sub-sections within a Gemtexter capsule. For the HTML output, each sub-section can use its own CSS and web font definitions. E.g.:
-
+

It's now possible to define sub-sections within a Gemtexter capsule. For the HTML output, each sub-section can use its own CSS and web font definitions. E.g.:

The foo.zone main site
The notes sub-section (with different fonts)
-

More

-Additionally, there were a couple of bug fixes, refactorings and overall improvements in the documentation made.
-
-Overall I think it's a pretty solid 1.1.0 release without anything groundbreaking (therefore no major version jump). But I am happy about it.
-
-Other related posts are:
-
+

Additionally, there were a couple of bug fixes, refactorings and overall improvements to the documentation.

+

Overall I think it's a pretty solid 1.1.0 release without anything groundbreaking (therefore no major version jump). But I am happy about it.

+

Other related posts are:

2023-03-25 Gemtexter 2.0.0 - Let's Gemtext again^2
2022-08-27 Gemtexter 1.1.0 - Let's Gemtext again (You are currently reading this)
2021-06-05 Gemtexter - One Bash script to rule it all
2021-04-24 Welcome to the Geminispace
-
-E-Mail your comments to hi@paul.cyou :-)
-
+

E-Mail your comments to hi@paul.cyou :-)

Back to the main site
@@ -2029,65 +1924,41 @@ v = 008 [v = p*c*(s != c ? 2 : 1)] Total logical CPUs

Perl is still a great choice

-Published at 2022-05-27T07:50:12+01:00; Updated at 2023-01-28
-
+

Published at 2022-05-27T07:50:12+01:00; Updated at 2023-01-28

Comic source: XKCD
-
-Perl (the Practical Extraction and Report Language) is a battle-tested, mature, multi-paradigm dynamic programming language. Note that it's not called PERL, neither P.E.R.L. nor Pearl. "Perl" is the name of the language and perl the name of the interpreter or the interpreter command.
-
-Unfortunately (it makes me sad), Perl's popularity has been declining over the last years as Google trends shows:
-
+

Perl (the Practical Extraction and Report Language) is a battle-tested, mature, multi-paradigm dynamic programming language. Note that it's not called PERL, P.E.R.L. or Pearl. "Perl" is the name of the language, and perl is the name of the interpreter or the interpreter command.

+

Unfortunately (it makes me sad), Perl's popularity has been declining over the last few years, as Google Trends shows:


-
-So why is that? Once the de-facto standard super-glue language for the web nowadays seems to have a bad reputation. Often, people state:
-
+

So why is that? Once the de-facto standard super-glue language for the web, Perl nowadays seems to have a bad reputation. Often, people state:

  • Perl is a write-only language. Nobody can read Perl code.
  • Perl? Isn't it abandoned? It's still at version 5!
  • Why use Perl as there are better alternatives?
  • Why all the sigils? It looks like an exploding ASCII factory!!
-

Write-only language

-Is Perl really a write-only language? You have to understand that Perl 5 was released in 1994 (28 years ago as of this writing) and when we refer to Perl we usually mean Perl 5. That's many years, and there are many old scripts not following the modern Perl best practices (as they didn't exist yet). So yes, legacy scripts may be difficult to read. Japanese may be difficult to read too if you don't know Japanese, though.
-
-To come back to the question: Is Perl a write-only language? I don't think so. Like in any other language, you have to apply best practices in order to keep your code maintainable. Some other programming languages enforce best practices, but that makes these languages less expressive. Perl follows the principles "there is more than one way to do it" (aka TIMTOWDI) and "making easy things easy and hard things possible".
-
-Perl gives the programmer more flexibility in how to do things, and this results in a stronger learning curve than for lesser expressive languages like for example Go or Python. But, like in everything in life, common sense has to be applied. You should not take TIMTOWDI to the extreme in a production piece of code. In my personal opinion, it is also more satisfying to program in an expressive language.
-
-Some good books on "good" Perl I can recommend are:
-
+

Is Perl really a write-only language? You have to understand that Perl 5 was released in 1994 (28 years ago as of this writing) and when we refer to Perl we usually mean Perl 5. That's many years, and there are many old scripts not following the modern Perl best practices (as they didn't exist yet). So yes, legacy scripts may be difficult to read. Japanese may be difficult to read too if you don't know Japanese, though.

+

To come back to the question: Is Perl a write-only language? I don't think so. Like in any other language, you have to apply best practices in order to keep your code maintainable. Some other programming languages enforce best practices, but that makes these languages less expressive. Perl follows the principles "there is more than one way to do it" (aka TIMTOWDI) and "making easy things easy and hard things possible".

+

Perl gives the programmer more flexibility in how to do things, and this results in a steeper learning curve than for less expressive languages such as Go or Python. But, as with everything in life, common sense has to be applied. You should not take TIMTOWDI to the extreme in a production piece of code. In my personal opinion, it is also more satisfying to program in an expressive language.

+

Some good books on "good" Perl I can recommend are:

Modern Perl
Higher Order Perl
-
-Due to Perl's expressiveness you will find a lot of obscure code in the interweb in form of obfuscation, fancy email signatures (JAPHs), art, polyglots and even poetry in Perl syntax. But that's not what you will find in production code. That's only people having fun with the language which is different to "getting things done". The expressiveness is a bonus. It makes the Perl programmers love Perl.
-
+

Due to Perl's expressiveness, you will find a lot of obscure code on the interweb in the form of obfuscation, fancy email signatures (JAPHs), art, polyglots and even poetry in Perl syntax. But that's not what you will find in production code. That's only people having fun with the language, which is different from "getting things done". The expressiveness is a bonus. It makes Perl programmers love Perl.

JAPH
http://www.cpan.org/misc/japh
Perl Poetry
-
-Even I personally have written some poetry in Perl and experimented with a polyglot script:
-
+

Even I personally have written some poetry in Perl and experimented with a polyglot script:

My very own Perl Poetry
A Perl-Raku-C polyglot generating the Fibonacci sequence
-
-This all doesn't mean that you can't "get things done" with Perl. Quite the opposite is the case. Perl is a very pragmatic programming language and is suitable very well for rapid prototyping and any kind of small to medium-sized scripts and programs. You can write large enterprise scale application in Perl too, but that wasn't the original intend of why Perl was invented (more on that later).
-
+

This all doesn't mean that you can't "get things done" with Perl. Quite the opposite is the case. Perl is a very pragmatic programming language and is suitable very well for rapid prototyping and any kind of small to medium-sized scripts and programs. You can write large enterprise scale application in Perl too, but that wasn't the original intend of why Perl was invented (more on that later).

Is Perl abandoned?

-As I pointed out in the previous section, Perl 5 is around for quite some time without any new major version released. This can lead to the impression that development is not progressing and that the project is abandoned. Nothing can be further from the truth. Perl 5.000 was released in 1994 and the latest version (as of this writing) Perl 5.34.1 was released two months ago in 2022. You can check the version history on Wikipedia. You will notice releases being made regularly:
-
+

As I pointed out in the previous section, Perl 5 is around for quite some time without any new major version released. This can lead to the impression that development is not progressing and that the project is abandoned. Nothing can be further from the truth. Perl 5.000 was released in 1994 and the latest version (as of this writing) Perl 5.34.1 was released two months ago in 2022. You can check the version history on Wikipedia. You will notice releases being made regularly:

Perl 5 version history
-
-As you can see, Perl 5 is under active development. I can also recommend to have a look at the following book, it summarizes all new Perl features which showed up after Perl v5.10:
-
+

As you can see, Perl 5 is under active development. I can also recommend to have a look at the following book, it summarizes all new Perl features which showed up after Perl v5.10:

Perl New Features by Joshua McAdams and brian d foy
-
-Actually, Perl is a family of two high-level, general-purpose, interpreted, dynamic programming languages. "Perl" refers to Perl 5, but from 2000 to 2019 it also referred to its redesigned "sister language", Perl 6, before the latter's name was officially changed to Raku in October 2019 as the differences between Perl 5 and Perl 6 were too groundbreaking. Raku would be a different topic (mostly out of scope of this blog article) but I at least wanted it to mention here. In my opinion, Raku is the "most powerful" programming language out there (I recently started learning it and intend to use it for some of my future personal programming projects):
-
+

Actually, Perl is a family of two high-level, general-purpose, interpreted, dynamic programming languages. "Perl" refers to Perl 5, but from 2000 to 2019 it also referred to its redesigned "sister language", Perl 6, before the latter's name was officially changed to Raku in October 2019 as the differences between Perl 5 and Perl 6 were too groundbreaking. Raku would be a different topic (mostly out of scope of this blog article) but I at least wanted it to mention here. In my opinion, Raku is the "most powerful" programming language out there (I recently started learning it and intend to use it for some of my future personal programming projects):

The Raku Programming Language
-
-So it means that Perl and Raku now exist in parallel. They influence each other, but are different programming languages now. So why not just all use Raku instead of Perl? There are still a couple of reasons of why to choose Perl over Raku:
-
+

So it means that Perl and Raku now exist in parallel. They influence each other, but are different programming languages now. So why not just all use Raku instead of Perl? There are still a couple of reasons of why to choose Perl over Raku:

  • Many programmers already know Perl and many scripts are already written in Perl. It's possible to call Perl code from Raku (either inline or as a library) and it is also possible to auto-convert Perl code into Raku code, but that's either a workaround or involves some kind of additional work.
  • Perl 5 comes with a great backwards compatibility. Perl scripts from 5.000 will generally still work on a recent version of Perl. New features usually have to be enabled via a so-called "use pragmas". For example, in order to enable sub signatures, use signatures; has to be specified.
  • @@ -2096,27 +1967,18 @@ So it means that Perl and Raku now exist in parallel. They influence each other,
  • Perl is reliable. It has been proven itself "millions" of times, over and over again. Large enterprises, such as booking.com, heavily rely on Perl. Did you know that the package manager of the OpenBSD operating system is programmed in Perl, too?
  • Perl is a great language to program in (given that you follow the modern best practices). Don't get confused when Perl is doing some things differently than other programming languages.
-
Perl feature pragmas
The OpenBSD Operating System
Why does OpenBSD still include Perl in its base installation?
-
-The renaming of Perl 6 to Raku has now opened the door for a future Perl 7. As far as I understand, Perl 7 will be Perl 5 but with modern features enabled by default (e.g. pragmas use strict;, use warnings;, use signatures; and so on. Also, the hope is that a Perl 7 with modern standards will attract more beginners. There aren't many Perl jobs out there nowadays. That's mostly due to Perl's bad (bad for no real reasons) reputation.
-
-Update 2022-12-10: A reader pointed out, that use v5.36; already turns strict, warnings and signatures pragmas automatically on!
-
+

The renaming of Perl 6 to Raku has now opened the door for a future Perl 7. As far as I understand, Perl 7 will be Perl 5 but with modern features enabled by default (e.g. pragmas use strict;, use warnings;, use signatures; and so on. Also, the hope is that a Perl 7 with modern standards will attract more beginners. There aren't many Perl jobs out there nowadays. That's mostly due to Perl's bad (bad for no real reasons) reputation.

+

Update 2022-12-10: A reader pointed out, that use v5.36; already turns strict, warnings and signatures pragmas automatically on!

Announcing Perl 7
What happened to Perl 7? (maybe have to use use v7;)
-
-Update 2022-12-10: A reader pointed out, that Perl 7 needs to provide a big improvement to earn and keep the attention for a major version bump.
-
-Update 2023-01-28: Meanwhile, I was also reading brian d foy's Perl New Feature book. It nicely presents all new features added to Perl since v5.10.
-
+

Update 2022-12-10: A reader pointed out, that Perl 7 needs to provide a big improvement to earn and keep the attention for a major version bump.

+

Update 2023-01-28: Meanwhile, I was also reading brian d foy's Perl New Feature book. It nicely presents all new features added to Perl since v5.10.

Perl New Features
-

Why use Perl as there are better alternatives?

-Here, common sense must be applied. I don't believe there is anything like "the perfect" programming language. Everyone has got his preferred (or a set of preferred) programming language to chose from. All programming languages come with their own set of strengths and weaknesses. These are the strengths making Perl shine, and you (technically) don't need to bother to look for "better" alternatives:
-
+

Here, common sense must be applied. I don't believe there is anything like "the perfect" programming language. Everyone has got his preferred (or a set of preferred) programming language to chose from. All programming languages come with their own set of strengths and weaknesses. These are the strengths making Perl shine, and you (technically) don't need to bother to look for "better" alternatives:

  • Perl is better than Shell/AWK/SED scripts. There's a point where shell scripts become fairly complex. The next step-up is to switch to Perl. There are many different versions of shells and AWK and SED interpreters. Do you always know which versions (mawk, nawk, gawk, sed, gsed, grep, ggrep...) are currently installed? These commands aren't fully compatible to each other. However, there is only one Perl 5. Simply: Perl is faster, more powerful, more expressive than any shell script can ever be, and it is also extendible through CPAN. Perl can directly talk to databases, which shell scripts can't.
  • Perl code tends to be compact so that it's much better suitable for "shell scripting" and quick "one-liners" than other languages. In my own experience: Ruby and Python code tends to blow up quickly. It doesn't mean that Ruby and Python are not suitable for this task, but I think Perl does much better.
  • @@ -2126,31 +1988,22 @@ Here, common sense must be applied. I don't believe there is anything like "the
  • Perl is a "deep" language. That means Perl got a lot of features and syntactic sugar and magic. Depending on the perspective, this could be interpreted as a downside too. But IMHO mastery of a "deep" language brings big rewards. The code can be very compact, and it is fun to code in it.
  • Perl is the only language I know which can do "taint checking". Running a script in taint mode makes Perl sanitize all external input and that's a great security feature. Ruby used to have this feature too, but it got removed (as I understand there were some problems with the implementation not completely safe and it was easier just to remove it from the language than to fix it).
-
-About the first point, using Perl for better "shell" scripts was actually the original intend of why Perl was invented in the first place.
-
+

About the first point, using Perl for better "shell" scripts was actually the original intend of why Perl was invented in the first place.

Perl one-liners
Mastering Regular Expressions
Taint checking
-
-Here are some reasons why not to chose Perl and look for "better" alternatives:
-
+

Here are some reasons why not to chose Perl and look for "better" alternatives:

  • If performance is your main objectives, then Perl might not be the language to use. Perl is a dynamic interpreted language, and it will generally never be as fast as statically typed languages compiled to native binaries (e.g. C/C++/Rust/Haskell) or statically typed languages run in a VM with JIT (e.g. Java) or gradually typed languages run in a VM (e.g. Raku) or languages like Golang (statically typed, compiled to a binary but still with a runtime in the binary). Perl might be still faster than the other language listed here in certain circumstances (e.g. faster startup time than Java or faster regular expressions engine), but usually it's not. It's not a problem of Perl, it's a problem of all dynamic scripting languages including Python, Ruby, ....
  • Don't use Perl (just yet) if you want to code object-oriented. Perl supports OOP, but it feels clunky and odd to use (blessed references to any data types are objects) and doesn't support real encapsulation out of the box. There are many (many) extensions available on CPAN to make OOP better, but that's totally fragmented. The most popular extension, Moose, comes with a huge dependency tree. But wait for Perl 7. It will maybe come with a new object system (an object system inspired by Raku).
  • It's possible to write large programs in Perl (make difficult things possible), but it might not be the best choice here. This also leads back to the clunky object system Perl has. You could write your projects in a procedural or functional style (Perl perfectly fits here), but OOP seems to be the gold standard for large projects nowadays. Functional programming requires a different mindset, and pure procedural programming lacks abstractions.
  • Apply common sense. What is the skill set your team has? What's already widely used and supported at work? Which languages comes with the best modules for the things you want to work on? Maybe Python is the answer (better machine learning modules). Maybe Perl is the better choice (better Bioinformatic modules). Perhaps Ruby is already the de-facto standard at work and everyone knows at least a little Ruby (as it happened to be at my workplace) and Ruby is "good enough" for all the tasks already. But that's not a hindrance to throw in a Perl one-liner once in a while :P.
-
Cor - Bringing modern OOP to the Perl Core
-

Why all the sigils? It looks like an exploding ASCII factory!!

-The sigils $ @ % & (where Perl is famously known for) serve a purpose. They seem confusing at first, but they actually make the code better readable. $scalar is a scalar variable (holding a single value), @array is an array (holding a list of values), %hash holds a list of key-value pairs and &sub is for subroutines. A given variable $ref can also hold reference to something. @$arrayref dereferences a reference to an array, %$hashref to a hash, $$scalarref to a scalar, &$subref dereferences a referene to a subroutine, etc. That can be encapsulated as deep as you want. (This paragraph only scratched the surface here of what Perl can do, and there is a lot of syntactic sugar not mentioned here).
-
-In most other programming languages, you won't know instantly what's the "basic type" of a given variable without looking at the variable declaration or the variable name (If named intelligently, e.g. a variable name containing a list of cats is cat_list). Even Ruby makes some use of sigils (@, @@ and $), but that's for a different purpose than in Perl (in Ruby it is about object scope, class scope and global scope). Raku uses all the sigils Perl uses plus an additional bunch of twigils, e.g. $.foo for a scalar object variable with public accessors, $!foo for a private scalar object variable, @.foo, @!foo, %.foo, %!foo and so on. Sigils (and twigils) are very convenient once you get used to them. Don't let them scare you off - they are there to help you!
-
+

The sigils $ @ % & (where Perl is famously known for) serve a purpose. They seem confusing at first, but they actually make the code better readable. $scalar is a scalar variable (holding a single value), @array is an array (holding a list of values), %hash holds a list of key-value pairs and &sub is for subroutines. A given variable $ref can also hold reference to something. @$arrayref dereferences a reference to an array, %$hashref to a hash, $$scalarref to a scalar, &$subref dereferences a referene to a subroutine, etc. That can be encapsulated as deep as you want. (This paragraph only scratched the surface here of what Perl can do, and there is a lot of syntactic sugar not mentioned here).

+

In most other programming languages, you won't know instantly what's the "basic type" of a given variable without looking at the variable declaration or the variable name (If named intelligently, e.g. a variable name containing a list of cats is cat_list). Even Ruby makes some use of sigils (@, @@ and $), but that's for a different purpose than in Perl (in Ruby it is about object scope, class scope and global scope). Raku uses all the sigils Perl uses plus an additional bunch of twigils, e.g. $.foo for a scalar object variable with public accessors, $!foo for a private scalar object variable, @.foo, @!foo, %.foo, %!foo and so on. Sigils (and twigils) are very convenient once you get used to them. Don't let them scare you off - they are there to help you!

https://www.perl.com/article/on-sigils/
-

Where do I personally still use perl?

  • I use Rexify for my OpenBSD server automation. Rexify is a configuration management system developed in Perl with similar features to Ansible but less bloated. It suits my personal needs perfectly.
  • @@ -2159,23 +2012,15 @@ In most other programming languages, you won't know instantly what's the "basic
  • I aim to leave my OpenBSD servers as "vanilla" as possible (trying to rely only on the standard/base installation without installing additional software from the packaging system or ports tree). All my scripts are written either Bourne shell or in Perl here. So there is no need to install additional interpreters.
  • Here and there, I drop a Perl one-liner in order to get stuff done (work and personally). A wise Perl Monk would say: "One one-liner a day keeps the troubles away".
-
-Btw.: Did you know that the first version of PHP was a set of Perl snippets? Only later, PHP became an independent programming language.
-
+

Btw.: Did you know that the first version of PHP was a set of Perl snippets? Only later, PHP became an independent programming language.

https://www.perl.org
-
-Update 2022-12-17: The following is another related post. I don't agree to the statement made there, that Python code tends to be shorter than Perl code, though!
-
+

Update 2022-12-17: The following is another related post. I don't agree to the statement made there, that Python code tends to be shorter than Perl code, though!

Why Perl is still relevant in 2022
-
-Other related posts are:
-
+

Other related posts are:

2022-05-27 Perl is still a great choice (You are currently reading this)
2011-05-07 Perl Daemon (Service Framework)
2008-06-26 Perl Poetry
-
-E-Mail your comments to hi@paul.cyou :-)
-
+

E-Mail your comments to hi@paul.cyou :-)

Back to the main site
@@ -2301,8 +2146,7 @@ learn () {

The release of DTail 4.0.0

-Published at 2022-03-06T18:11:39+00:00
-
+

Published at 2022-03-06T18:11:39+00:00

                               ,_---~~~~~----._
                         _,,_,*^____      _____``*g*\"*,
@@ -2316,19 +2160,13 @@ learn () {
                         |                            |
                          |                           |
 
-
-I have recently released DTail 4.0.0 and this blog post goes through all the new goodies. If you want to jump directly to DTail, do it here (there are nice animated gifs which demonstrates the usage pretty well):
-
+

I have recently released DTail 4.0.0 and this blog post goes through all the new goodies. If you want to jump directly to DTail, do it here (there are nice animated gifs which demonstrates the usage pretty well):

https://dtail.dev
-

So, what's new in 4.0.0?

Rewritten logging

-For DTail 4, logging has been completely rewritten. The new package name is "internal/io/dlog". I rewrote the logging because DTail is a special case here: There are logs processed by DTail, there are logs produced by the DTail server itself, there are logs produced by a DTail client itself, there are logs only logged by a DTail client, there are logs only logged by the DTail server, and there are logs logged by both, server and client. There are also different logging levels and outputs involved.
-
-As you can imagine, it becomes fairly complex. There is no ready Go off-shelf logging library which suits my needs and the logging code in DTail 3 was just one big source code file with global variables and it wasn't sustainable to maintain anymore. So why not rewrite it for profit and fun?
-
-There's a are new log level structure now (The log level now can be specified with the "-logLevel" command line flag):
-
+

For DTail 4, logging has been completely rewritten. The new package name is "internal/io/dlog". I rewrote the logging because DTail is a special case here: There are logs processed by DTail, there are logs produced by the DTail server itself, there are logs produced by a DTail client itself, there are logs only logged by a DTail client, there are logs only logged by the DTail server, and there are logs logged by both, server and client. There are also different logging levels and outputs involved.

+

As you can imagine, it becomes fairly complex. There is no ready Go off-shelf logging library which suits my needs and the logging code in DTail 3 was just one big source code file with global variables and it wasn't sustainable to maintain anymore. So why not rewrite it for profit and fun?

+

There's a are new log level structure now (The log level now can be specified with the "-logLevel" command line flag):

 // Available log levels.
 const (
@@ -2345,14 +2183,10 @@ const (
 	All     level = iota
 )
 
-
-DTail also supports multiple log outputs (e.g. to file or to stdout). More are now easily pluggable with the new logging package. The output can also be "enriched" (default) or "plain" (read more about that further below).
-
+

DTail also supports multiple log outputs (e.g. to file or to stdout). More are now easily pluggable with the new logging package. The output can also be "enriched" (default) or "plain" (read more about that further below).

Configurable terminal color codes

-A complaint I received from the users of DTail 3 were the terminal colors used for the output. Under some circumstances (terminal configuration) it made the output difficult to read so that users defaulted to "--noColor" (disabling colored output completely). I toke it by heart and also rewrote the color handling. It's now possible to configure the foreground and background colors and an attribute (e.g. dim, bold, ...).
-
-The example "dtail.json" configuration file represents the default (now, more reasonable default) color codes used, and it is free to the user to customize them:
-
+

A complaint I received from the users of DTail 3 were the terminal colors used for the output. Under some circumstances (terminal configuration) it made the output difficult to read so that users defaulted to "--noColor" (disabling colored output completely). I toke it by heart and also rewrote the color handling. It's now possible to configure the foreground and background colors and an attribute (e.g. dim, bold, ...).

+

The example "dtail.json" configuration file represents the default (now, more reasonable default) color codes used, and it is free to the user to customize them:

 {
   "Client": {
@@ -2447,9 +2281,7 @@ The example "dtail.json" configuration file represents the default (now, more re
   ...
 }
 
-
-You notice the different sections - these are different contexts:
-
+

You notice the different sections - these are different contexts:

  • Remote: Color configuration for all log lines sent remotely from the server to the client.
  • Client: Color configuration for all lines produced by a DTail client by itself (e.g. status information).
  • @@ -2457,96 +2289,68 @@ You notice the different sections - these are different contexts:
  • MaprTable: Color configuration for the map-reduce table output.
  • Common: Common color configuration used in various places (e.g. when it's not clear what's the current context of a line).
-
-When you do so, make sure that you check your "dtail.json" against the JSON schema file. This is to ensure that you don't configure an invalid color accidentally (requires "jsonschema" to be installed on your computer). Furthermore, the schema file is also a good reference for all possible colors available:
-
+

When you do so, make sure that you check your "dtail.json" against the JSON schema file. This is to ensure that you don't configure an invalid color accidentally (requires "jsonschema" to be installed on your computer). Furthermore, the schema file is also a good reference for all possible colors available:

 jsonschema -i dtail.json schemas/dtail.schema.json
 
-

Serverless mode

-All DTail commands can now operate on log files (and other text files) directly without any DTail server running. So there isn't a need anymore to install a DTail server when you are on the target server already anyway, like the following example shows:
-
+

All DTail commands can now operate on log files (and other text files) directly without any DTail server running. So there isn't a need anymore to install a DTail server when you are on the target server already anyway, like the following example shows:

 % dtail --files /var/log/foo.log
 
-
-or
-
+

or

 % dmap --files /var/log/foo.log --query 'from TABLE select .... outfile result.csv'
 
-
-The way it works in Go code is that a connection to a server is managed through an interface and in serverless mode DTail calls through that interface to the server code directly without any TCP/IP and SSH connection made in the background. This means, that the binaries are a bit larger (also ship with the code which normally would be executed by the server) but the increase of binary size is not much.
-
+

The way it works in Go code is that a connection to a server is managed through an interface and in serverless mode DTail calls through that interface to the server code directly without any TCP/IP and SSH connection made in the background. This means, that the binaries are a bit larger (also ship with the code which normally would be executed by the server) but the increase of binary size is not much.

Shorthand flags

-The "--files" from the previous example is now redundant. As a shorthand, It is now possible to do the following instead:
-
+

The "--files" from the previous example is now redundant. As a shorthand, It is now possible to do the following instead:

 % dtail /var/log/foo.log
 
-
-Of course, this also works with all other DTail client commands (dgrep, dcat, ... etc).
-
+

Of course, this also works with all other DTail client commands (dgrep, dcat, ... etc).

Spartan (aka plain) mode

-There's a plain mode, which makes DTail only print out the "plain" text of the files operated on (without any DTail specific enriched output). E.g.:
-
+

There's a plain mode, which makes DTail only print out the "plain" text of the files operated on (without any DTail specific enriched output). E.g.:

 % dcat --plain /etc/passwd > /etc/test
 % diff /etc/test /etc/passwd  # Same content, no diff
 
-
-This might be useful if you wanted to post-process the output.
-
+

This might be useful if you wanted to post-process the output.

Standard input pipe

-In serverless mode, you might want to process your data in a pipeline. You can do that now too through an input pipe:
-
+

In serverless mode, you might want to process your data in a pipeline. You can do that now too through an input pipe:

 % dgrep --plain --regex 'somethingspecial' /var/log/foo.log |
     dmap --query 'from TABLE select .... outfile result.csv'
 
-
-Or, use any other "standard" tool:
-
+

Or, use any other "standard" tool:

 % awk '.....' < /some/file | dtail ....
 
-

New command dtailhealth

-Prior to DTail 4, there was a flag for the "dtail" command to check the health of a remote DTail server (for use with monitoring system such as Nagios). That has been moved out to a separate binary to reduce complexity of the "dtail" command. The following checks whether DTail is operational on the current machine (you could also check a remote instance of DTail server, just adjust the server address).
-
+

Prior to DTail 4, there was a flag for the "dtail" command to check the health of a remote DTail server (for use with monitoring system such as Nagios). That has been moved out to a separate binary to reduce complexity of the "dtail" command. The following checks whether DTail is operational on the current machine (you could also check a remote instance of DTail server, just adjust the server address).

 % cat check_dtail.sh
 #!/bin/sh
 
 exec /usr/local/bin/dtailhealth --server localhost:2222
 
-

Improved documentation

-Some features, such as custom log formats and the map-reduce query language, are now documented. Also, the examples have been updated to reflect the new features added. This also includes the new animated example Gifs (plus documentation how they were created).
-
-I must admit that not all features are documented yet:
-
+

Some features, such as custom log formats and the map-reduce query language, are now documented. Also, the examples have been updated to reflect the new features added. This also includes the new animated example Gifs (plus documentation how they were created).

+

I must admit that not all features are documented yet:

  • Server side scheduled map-reduce queries
  • Server side continuous map-reduce queries
  • Some more docs about terminal color customization
  • Some more docs about log levels
-
-That will be added in one of the future releases.
-
+

That will be added in one of the future releases.

Integration testing suite

-DTail comes already with some unit tests, but what's new is a full integration testing suite which covers all common use cases of all the commands (dtail, dcat, dgrep, dmap) with a server backend and also in serverless mode.
-
-How are the tests implemented? All integration tests are simply unit tests in the "./integrationtests" folder. They must be explicitly activated with:
-
+

DTail comes already with some unit tests, but what's new is a full integration testing suite which covers all common use cases of all the commands (dtail, dcat, dgrep, dmap) with a server backend and also in serverless mode.

+

How are the tests implemented? All integration tests are simply unit tests in the "./integrationtests" folder. They must be explicitly activated with:

 % export DTAIL_INTEGRATION_TEST_RUN_MODE=yes
 
-
-Once done, first compile all commands, and then run the integration tests:
-
+

Once done, first compile all commands, and then run the integration tests:

 % make
 .
@@ -2555,45 +2359,31 @@ Once done, first compile all commands, and then run the integration tests:
% go clean -testcache % go test -race -v ./integrationtests
-

Improved code

-Not that the code quality of DTail has been bad (I have been using Go vet and Go lint for previous releases and will keep using these), but this time I had new tools (such as SonarQube and BlackDuck) in my arsenal to:
-
+

Not that the code quality of DTail has been bad (I have been using Go vet and Go lint for previous releases and will keep using these), but this time I had new tools (such as SonarQube and BlackDuck) in my arsenal to:

  • Reduce the complexity of a couple of functions (splitting code up into several smaller functions)
  • Avoid repeating code (this version of DTail doesn't use Go generics yet, though).
-
-Other than that, a lot of other code has been refactored as I saw fit.
-
+

Other than that, a lot of other code has been refactored as I saw fit.

Use of memory pools

-DTail makes excessive use of string builder and byte buffer objects. For performance reasons, those are now re-used from memory pools.
-
+

DTail makes excessive use of string builder and byte buffer objects. For performance reasons, those are now re-used from memory pools.

What's next

-DTail 5 won't be released any time soon I guess, but some 4.x.y releases will follow this year fore sure. I can think of:
-
+

DTail 5 won't be released any time soon I guess, but some 4.x.y releases will follow this year fore sure. I can think of:

  • New (but backwards compatible) features which don't require a new major version bump (some features have been requested at work internally).
  • Even more improved documentation.
  • Dependency updates.
-
-I use usually DTail at work, but I have recently installed it on my personal OpenBSD machines too. I might write a small tutorial here (and I might also add the rc scripts as examples to one of the next DTail releases).
-
-I am a bit busy at the moment with two other pet projects of mine (one internal work-project, and one personal one, the latter you will read about in the next couple of months). If you have ideas (or even a patch), then please don't hesitate to contact me (either via E-Mail or a request at GitHub).
-
-Other related posts are:
-
+

I use usually DTail at work, but I have recently installed it on my personal OpenBSD machines too. I might write a small tutorial here (and I might also add the rc scripts as examples to one of the next DTail releases).

+

I am a bit busy at the moment with two other pet projects of mine (one internal work-project, and one personal one, the latter you will read about in the next couple of months). If you have ideas (or even a patch), then please don't hesitate to contact me (either via E-Mail or a request at GitHub).

+

Other related posts are:

2022-10-30 Installing DTail on OpenBSD
2022-03-06 The release of DTail 4.0.0 (You are currently reading this)
2021-04-22 DTail - The distributed log tail program
-
-Thanks!
-
-Paul
-
-E-Mail your comments to hi@paul.cyou :-)
-
+

Thanks!

+

Paul

+

E-Mail your comments to hi@paul.cyou :-)

Back to the main site
@@ -2826,8 +2616,7 @@ GNU/kFreeBSD rhea.buetow.org 8.0-RELEASE-p5 FreeBSD 8.0-RELEASE-p5 #2: Sat Nov 2

Bash Golf Part 2

-Published at 2022-01-01T23:36:15+00:00; Updated at 2022-01-05
-
+

Published at 2022-01-01T23:36:15+00:00; Updated at 2022-01-05

 
     '\       '\                   .  .                |>18>>
@@ -2839,23 +2628,17 @@ GNU/kFreeBSD rhea.buetow.org 8.0-RELEASE-p5 FreeBSD 8.0-RELEASE-p5 #2: Sat Nov 2
 jgs^^^^^^^`^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                         Art by Joan Stark, mod. by Paul Buetow
 
-
-This is the second blog post about my Bash Golf series. This series is random Bash tips, tricks and weirdnesses I came across. It's a collection of smaller articles I wrote in an older (in German language) blog, which I translated and refreshed with some new content.
-
+

This is the second blog post of my Bash Golf series. This series is about random Bash tips, tricks and weirdnesses I came across. It's a collection of smaller articles I wrote in an older blog (in German), which I translated and refreshed with some new content.

2022-01-01 Bash Golf Part 2 (You are currently reading this)
2021-11-29 Bash Golf Part 1
-

Redirection

-Let's have a closer look at Bash redirection. As you might already know that there are 3 standard file descriptors:
-
+

Let's have a closer look at Bash redirection. As you might already know, there are 3 standard file descriptors:

  • 0 aka stdin (standard input)
  • 1 aka stdout (standard output)
  • 2 aka stderr (standard error output)
-
-These are most certainly the ones you are using on regular basis. "/proc/self/fd" lists all file descriptors which are open by the current process (in this case: the current Bash shell itself):
-
+

These are most certainly the ones you are using on a regular basis. "/proc/self/fd" lists all file descriptors which are open in the current process (in this case: the current Bash shell itself):

 ❯ ls -l /proc/self/fd/
 total 0
@@ -2864,45 +2647,32 @@ lrwx------. 1 paul paul 64 Nov 23 09:46 1 -> /dev/pts/9
 lrwx------. 1 paul paul 64 Nov 23 09:46 2 -> /dev/pts/9
 lr-x------. 1 paul paul 64 Nov 23 09:46 3 -> /proc/162912/fd
 
-
-The following examples demonstrate two different ways to accomplish the same thing. The difference is that the first command is directly printing out "Foo" to stdout and the second command is explicitly redirecting stdout to its own stdout file descriptor:
-
+

The following examples demonstrate two different ways to accomplish the same thing. The difference is that the first command prints "Foo" directly to stdout, whereas the second command explicitly redirects its output into /proc/self/fd/0, which (just like fd 1 here) points to the same terminal:

 ❯ echo Foo
 Foo
 ❯ echo Foo > /proc/self/fd/0
 Foo
 
-
-Other useful redirections are:
-
+

Other useful redirections are:

  • Redirect stderr to stdout: "echo foo 2>&1"
  • Redirect stdout to stderr: "echo foo >&2"
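A small, hedged sketch of what these two redirections actually do: "2>&1" points stderr at whatever stdout currently is, and ">&2" points stdout at stderr (the path and variable names below are made up for illustration):

```shell
#!/usr/bin/env bash
# 2>&1: stderr is duplicated onto stdout, so command substitution
# (which only captures stdout) now sees the error message:
captured=$(ls /nonexistent/path 2>&1 || true)
echo "captured from stdout: $captured"

# >&2: stdout is sent to stderr, so command substitution captures nothing:
captured=$(echo 'only on stderr' >&2)
echo "captured from stdout: '$captured'"
```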
-
-It is, however, not possible to redirect multiple times within the same command. E.g. the following won't work. You would expect stdin to be redirected to stderr and then stderr to be redirected to /dev/null. But as the example shows, Foo is still printed out:
-
+

It is, however, not possible to simply chain multiple redirections within the same command. E.g. the following won't work. You would expect stdout to be redirected to stderr and then stderr to be redirected to /dev/null. But as the example shows, Foo is still printed out:

 ❯ echo Foo 1>&2 2>/dev/null
 Foo
 
-
-Update: A reader sent me an email and pointed out that the order of the redirections is important.
-
-As you can see, the following will not print out anything:
-
+

Update: A reader sent me an email and pointed out that the order of the redirections is important.

+

As you can see, the following will not print out anything:

 ❯ echo Foo 2>/dev/null 1>&2
 ❯
 
-
-A good description (also pointed out by the reader) can be found here:
-
+

A good description (also pointed out by the reader) can be found here:

Order of redirection
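The order dependency described there can be condensed into one runnable comparison (redirections are processed left to right; both commands attempt the same thing in a different order):

```shell
#!/usr/bin/env bash
# fd 2 first points to /dev/null, then fd 1 is pointed at the already
# silenced fd 2 -- nothing reaches the terminal:
bash -c 'echo Foo 2>/dev/null 1>&2'

# fd 1 first points at the current fd 2 (the terminal), then fd 2 is
# silenced -- Foo still appears, on stderr:
bash -c 'echo Foo 1>&2 2>/dev/null'
```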
-
-Ok, back to the original blog post. You can also use grouping here (neither of these commands will print out anything to stdout):
-
+

Ok, back to the original blog post. You can also use grouping here (neither of these commands will print out anything to stdout):

 ❯ { echo Foo 1>&2; } 2>/dev/null
 ❯ ( echo Foo 1>&2; ) 2>/dev/null
@@ -2910,9 +2680,7 @@ Ok, back to the original blog post. You can also use grouping here (neither of t
 ❯ ( ( ( echo Foo 1>&2; ) 2>&1; ) 1>&2; ) 2>/dev/null
 ❯
 
-
-A handy way to list all open file descriptors is to use the "lsof" command (that's not a Bash built-in), whereas $$ is the process id (pid) of the current shell process:
-
+

A handy way to list all open file descriptors is the "lsof" command (that's not a Bash built-in), where $$ is the process id (pid) of the current shell process:

 ❯ lsof -a -p $$ -d0,1,2
 COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
@@ -2920,9 +2688,7 @@ bash    62676 paul    0u   CHR  136,9      0t0   12 /dev/pts/9
 bash    62676 paul    1u   CHR  136,9      0t0   12 /dev/pts/9
 bash    62676 paul    2u   CHR  136,9      0t0   12 /dev/pts/9
 
-
-Let's create our own descriptor "3" for redirection to a file named "foo":
-
+

Let's create our own descriptor "3" for redirection to a file named "foo":

 ❯ touch foo
 ❯ exec 3>foo # This opens fd 3 and binds it to file foo.
@@ -2937,9 +2703,7 @@ Bratwurst
 ❯ echo Steak >&3
 -bash: 3: Bad file descriptor
 
-
-You can also override the default file descriptors, as the following example script demonstrates:
-
+

You can also override the default file descriptors, as the following example script demonstrates:

 ❯ cat grandmaster.sh
 #!/usr/bin/env bash
@@ -2966,19 +2730,15 @@ echo Second line: $LINE2
 # Restore default stdin and delete fd 6
 exec 0<&6 6<&-
 
-
-Let's execute it:
-
+

Let's execute it:

 ❯ chmod 750 ./grandmaster.sh
 ❯ ./grandmaster.sh
 First line: Learn You a Haskell
 Second line: for Great Good
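The full grandmaster.sh is not shown above, but the idea can be sketched like this (the file name and variable names are my own; the fd 6 save/restore dance matches the exec lines quoted from the script):

```shell
#!/usr/bin/env bash
# Create a file to read from (a stand-in for the script's input file):
printf 'Learn You a Haskell\nfor Great Good\n' > /tmp/book.txt

exec 6<&0             # Save the current stdin as fd 6
exec 0</tmp/book.txt  # Override stdin: it now reads from the file

read -r LINE1
read -r LINE2
echo "First line: $LINE1"
echo "Second line: $LINE2"

exec 0<&6 6<&-        # Restore default stdin and close fd 6
rm -f /tmp/book.txt
```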
 
-

HERE

-I have mentioned HERE-documents and HERE-strings already in this post. Let's do some more examples. The following "cat" receives a multi line string from stdin. In this case, the input multi line string is a HERE-document. As you can see, it also interpolates variables (in this case the output of "date" running in a subshell).
-
+

I have mentioned HERE-documents and HERE-strings already in this post. Let's do some more examples. The following "cat" receives a multi-line string from stdin. In this case, the input is a HERE-document. As you can see, it also interpolates variables (in this case the output of "date" running in a subshell).

 ❯ cat <<END
 > Hello World
@@ -2987,9 +2747,7 @@ I have mentioned HERE-documents and HERE-strings already in this post. Let's do
 Hello World
 It's Fri 26 Nov 08:46:52 GMT 2021
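One detail worth knowing on top of this example: quoting the HERE-document delimiter turns interpolation off. A small sketch:

```shell
#!/usr/bin/env bash
# Unquoted delimiter: variables and $(...) are interpolated:
cat <<END
Year: $(date +%Y)
END

# Quoted delimiter: the body is passed through literally:
cat <<'END'
Year: $(date +%Y)
END
```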
 
-
-You can also write it the following way, but that's less readable (it's good for an obfuscation contest):
-
+

You can also write it the following way, but that's less readable (it's good for an obfuscation contest):

 ❯ <<END cat
 > Hello Universe
@@ -2998,9 +2756,7 @@ You can also write it the following way, but that's less readable (it's good for
 Hello Universe
 It's Fri 26 Nov 08:47:32 GMT 2021
 
-
-Besides of an HERE-document, there is also a so-called HERE-string. Besides of...
-
+

Besides the HERE-document, there is also the so-called HERE-string. Instead of...

 ❯ declare VAR=foo
 ❯ if echo "$VAR" | grep -q foo; then
@@ -3008,32 +2764,24 @@ Besides of an HERE-document, there is also a so-called HERE-string. Besides of..
 > fi
 $VAR contains foo
 
-
-...you can use a HERE-string like that:
-
+

...you can use a HERE-string like this:

 ❯ if grep -q foo <<< "$VAR"; then
 > echo '$VAR contains foo'
 > fi
 $VAR contains foo
 
-
-Or even shorter, you can do:
-
+

Or even shorter, you can do:

 ❯ grep -q foo <<< "$VAR" && echo '$VAR contains foo'
 $VAR contains foo
 
-
-You can also use a Bash regex to accomplish the same thing, but the points of the examples so far were to demonstrate HERE-{documents,strings} and not Bash regular expressions:
-
+

You can also use a Bash regex to accomplish the same thing, but the points of the examples so far were to demonstrate HERE-{documents,strings} and not Bash regular expressions:

 ❯ if [[ "$VAR" =~ foo ]]; then echo yay; fi
 yay
 
-
-You can also use it with "read":
-
+

You can also use it with "read":

 ❯ read a <<< ja
 ❯ echo $a
@@ -3048,19 +2796,15 @@ Learn
 ❯ echo ${words[3]}
 Golang
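The partly elided example above feeds a HERE-string into "read -a" to split a sentence into an array; a hedged reconstruction (the variable names are assumptions):

```shell
#!/usr/bin/env bash
sentence='Learn you a Golang for Great Good'
read -r -a words <<< "$sentence"  # split on whitespace into an array
echo "${words[0]}"                # Learn
echo "${words[3]}"                # Golang
echo "${#words[@]} words in total"
```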
 
-
-The following is good for an obfuscation contest too:
-
+

The following is good for an obfuscation contest too:

 ❯ echo 'I like Perl too' > perllove.txt
 ❯ cat - perllove.txt <<< "$dumdidumstring"
 Learn you a Golang for Great Good
 I like Perl too
 
-

RANDOM

-Random is a special built-in variable containing a different pseudo random number each time it's used.
-
+

RANDOM is a special built-in variable containing a different pseudo-random number each time it is read.

 ❯ echo $RANDOM
 11811
@@ -3069,11 +2813,8 @@ Random is a special built-in variable containing a different pseudo random numbe
 ❯ echo $RANDOM
 9104
 
-
-That's very useful if you want to randomly delay the execution of your scripts when you run it on many servers concurrently, just to spread the server load (which might be caused by the script run) better.
-
-Let's say you want to introduce a random delay of 1 minute. You can accomplish it with:
-
+

That's very useful if you want to randomly delay the execution of your script when you run it on many servers concurrently, just to spread the server load (which might be caused by the script run) better.

+

Let's say you want to introduce a random delay of 1 minute. You can accomplish it with:

 ❯ cat ./calc_answer_to_ultimate_question_in_life.sh
 #!/usr/bin/env bash
@@ -3101,13 +2842,10 @@ main
 Delaying script execution for 42 seconds...
 Continuing script execution...
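The delay script itself is mostly elided above; a minimal sketch of the core idea (the names are my own, and the sleep is commented out so the sketch returns immediately):

```shell
#!/usr/bin/env bash
declare -i max_delay=60
declare -i delay=$((RANDOM % max_delay))  # RANDOM yields 0..32767
echo "Delaying script execution for $delay seconds..."
# sleep "$delay"   # uncomment for a real delay
echo "Continuing script execution..."
```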
 
-

set -x, set -e and pipefail

-In my opinion, -x and -e and pipefile are the most useful Bash options. Let's have a look at them one after another.
-
+

In my opinion, -x, -e and pipefail are the most useful Bash options. Let's have a look at them one after another.

-x

--x prints commands and their arguments as they are executed. This helps to develop and debug your Bash code:
-
+

-x prints commands and their arguments as they are executed. This helps to develop and debug your Bash code:

 ❯ set -x
 ❯ square () { local -i num=$1; echo $((num*num)); }
@@ -3119,15 +2857,11 @@ In my opinion, -x and -e and pipefile are the most useful Bash options. Let's ha
 + echo 'Square of 11 is 121'
 Square of 11 is 121
 
-
-You can also set it when calling an external script without modifying the script itself:
-
+

You can also set it when calling an external script without modifying the script itself:

 ❯ bash -x ./half_broken_script_to_be_debugged.sh
 
-
-Let's do that on one of the example scripts we covered earlier:
-
+

Let's do that on one of the example scripts we covered earlier:

 ❯ bash -x ./grandmaster.sh
 + bash -x ./grandmaster.sh
@@ -3145,28 +2879,21 @@ Second line: for Great Good
 + exec
 ❯
 
-

-e

-This is a very important option you want to use when you are paranoid. This means, you should always "set -e" in your scripts when you need to make absolutely sure that your script runs successfully (with that I mean that no command should exit with an unexpected status code).
-
-Ok, let's dig deeper:
-
+

This is a very important option you want to use when you are paranoid. This means you should always "set -e" in your scripts when you need to make absolutely sure that your script runs successfully (by that I mean that no command should exit with an unexpected status code).

+

Ok, let's dig deeper:

 ❯ help set | grep -- -e
       -e  Exit immediately if a command exits with a non-zero status.
 
-
-As you can see in the following example, the Bash terminates after the execution of "grep" as "foo" is not matching "bar". Therefore, grep exits with 1 (unsuccessfully) and the shell aborts. And therefore, "bar" will not be printed out anymore:
-
+

As you can see in the following example, Bash terminates after the execution of "grep", as the input "foo" does not match the pattern "bar". Therefore, grep exits with 1 (unsuccessfully) and the shell aborts. And therefore, "bar" will not be printed out anymore:

 ❯ bash -c 'set -e; echo hello; grep -q bar <<< foo; echo bar'
 hello
 ❯ echo $?
 1
 
-
-Whereas the outcome changes when the regex matches:
-
+

Whereas the outcome changes when the regex matches:

 ❯ bash -c 'set -e; echo hello; grep -q bar <<< barman; echo bar'
 hello
@@ -3174,9 +2901,7 @@ bar
 ❯ echo $?
 0
 
-
-So does it mean that grep will always make the shell terminate whenever its exit code isn't 0? This will render "set -e" quite unusable. Frankly, there are other commands where an exit status other than 0 should not terminate the whole script abruptly. Usually, what you want is to branch your code based on the outcome (exit code) of a command:
-
+

So does it mean that grep will always make the shell terminate whenever its exit code isn't 0? This will render "set -e" quite unusable. Frankly, there are other commands where an exit status other than 0 should not terminate the whole script abruptly. Usually, what you want is to branch your code based on the outcome (exit code) of a command:

 ❯ bash -c 'set -e
 >    grep -q bar <<< foo
@@ -3188,11 +2913,8 @@ So does it mean that grep will always make the shell terminate whenever its exit
 ❯ echo $?
 1
 
-
-...but the example above won't reach any of the branches and won't print out anything, as the script terminates right after grep.
-
-The proper solution is to use grep as an expression in a conditional (e.g. in an if-else statement):
-
+

...but the example above won't reach any of the branches and won't print out anything, as the script terminates right after grep.

+

The proper solution is to use grep as an expression in a conditional (e.g. in an if-else statement):

 ❯ bash -c 'set -e
 >    if grep -q bar <<< foo; then
@@ -3213,9 +2935,7 @@ matching
 ❯ echo $?
 0
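Putting the pieces together, a self-contained version of the conditional approach (it exits 0 even though grep itself does not match):

```shell
#!/usr/bin/env bash
set -e
# grep's non-zero exit status is consumed by the if-condition,
# so set -e does not terminate the script here:
if grep -q bar <<< foo; then
    echo matching
else
    echo not matching
fi
echo "script reached the end"
```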
 
-
-You can also temporally undo "set -e" if there is no other way:
-
+

You can also temporarily undo "set -e" if there is no other way:

 ❯ cat ./e.sh
 #!/usr/bin/env bash
@@ -3257,34 +2977,25 @@ Hello World
 Hello Universe
 Hello You!
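The e.sh script is largely elided above; the "temporarily undo" trick it demonstrates can be sketched like this (the script body is my own reconstruction, not the original):

```shell
#!/usr/bin/env bash
set -e
echo "Hello World"
set +e                 # temporarily disable -e...
grep -q bar <<< foo    # ...so this expected failure does not kill the script
set -e                 # paranoia back on
echo "Hello Universe"
echo "Hello You!"
```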
 
-
-Why does calling "foo" with no arguments make the script terminate? Because as no argument was given, the "shift" won't have anything to do as the argument list $@ is empty, and therefore "shift" fails with a non-zero status.
-
-Why would you want to use "shift" after function-local variable assignments? Have a look at my personal Bash coding style guide for an explanation :-):
-
+

Why does calling "foo" with no arguments make the script terminate? Because no argument was given, "shift" has nothing to shift, as the argument list $@ is empty, and therefore "shift" fails with a non-zero status.
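This can be sketched in a few lines (the subshell keeps the failing call from terminating the sketch itself; the function body is an assumption, not the original script):

```shell
#!/usr/bin/env bash
bash -c '
set -e
foo () {
    local name=$1; shift   # shift fails when $@ is empty
    echo "Hello $name!"
}
foo You
foo                        # no argument: shift exits non-zero, -e aborts
echo "never reached"
'
echo "subshell exit status: $?"
```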

+

Why would you want to use "shift" after function-local variable assignments? Have a look at my personal Bash coding style guide for an explanation :-):

./2021-05-16-personal-bash-coding-style-guide.html
-

pipefail

-The pipefail option makes it so that not only the exit code of the last command of the pipe counts regards its exit code but any command of the pipe:
-
+

The pipefail option makes it so that not only the exit code of the last command of the pipe determines the pipe's exit status, but the exit code of any command of the pipe:

 ❯ help set | grep pipefail -A 2
     pipefail     the return value of a pipeline is the status of
                  the last command to exit with a non-zero status,
                  or zero if no command exited with a non-zero status
 
-
-The following greps for paul in passwd and converts all lowercase letters to uppercase letters. The exit code of the pipe is 0, as the last command of the pipe (converting from lowercase to uppercase) succeeded:
-
+

The following greps for paul in passwd and converts all lowercase letters to uppercase letters. The exit code of the pipe is 0, as the last command of the pipe (converting from lowercase to uppercase) succeeded:

 ❯ grep paul /etc/passwd | tr '[a-z]' '[A-Z]'
 PAUL:X:1000:1000:PAUL BUETOW:/HOME/PAUL:/BIN/BASH
 ❯ echo $?
 0
 
-
-Let's look at another example, where "TheRock" doesn't exist in the passwd file. However, the pipes exit status is still 0 (success). This is so because the last command ("tr" in this case) still succeeded. It is just that it didn't get any input on stdin to process:
-
+

Let's look at another example, where "TheRock" doesn't exist in the passwd file. However, the pipe's exit status is still 0 (success). This is so because the last command ("tr" in this case) still succeeded. It just didn't get any input on stdin to process:

 ❯ grep TheRock /etc/passwd
 ❯ echo $?
@@ -3293,25 +3004,19 @@ Let's look at another example, where "TheRock" doesn't exist in the passwd file.
 ❯ echo $?
 0
 
-
-To change this behaviour, pipefile can be used. Now, the pipes exit status is 1 (fail), because the pipe contains at least one command (in this case grep) which exited with status 1:
-
+

To change this behaviour, pipefail can be used. Now, the pipe's exit status is 1 (fail), because the pipe contains at least one command (in this case grep) which exited with status 1:

 ❯ set -o pipefail
 ❯ grep TheRock /etc/passwd | tr '[a-z]' '[A-Z]'
 ❯ echo $?
 1
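The difference can be condensed into one runnable comparison (no passwd lookup needed; grep simply fails to match):

```shell
#!/usr/bin/env bash
# grep fails (no match), but cat -- the last command -- succeeds:
bash -c 'echo hay | grep needle | cat'
echo "without pipefail: $?"

bash -c 'set -o pipefail; echo hay | grep needle | cat'
echo "with pipefail: $?"
```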
 
-
-Other related posts are:
-
+

Other related posts are:

2022-01-01 Bash Golf Part 2 (You are currently reading this)
2021-11-29 Bash Golf Part 1
2021-06-05 Gemtexter - One Bash script to rule it all
2021-05-16 Personal Bash coding style guide
-
-E-Mail your comments to hi@paul.cyou :-)
-
+

E-Mail your comments to hi@paul.cyou :-)

Back to the main site
@@ -3422,8 +3127,7 @@ E-Mail your comments to hi@paul.cyou :-)

Bash Golf Part 1

-Published at 2021-11-29T14:06:14+00:00; Updated at 2022-01-05
-
+

Published at 2021-11-29T14:06:14+00:00; Updated at 2022-01-05

 
      '\                   .  .                        |>18>>
@@ -3435,27 +3139,19 @@ E-Mail your comments to hi@paul.cyou :-)
jgs^^^^^^^`^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Art by Joan Stark
-
-This is the first blog post about my Bash Golf series. This series is about random Bash tips, tricks and weirdnesses I came across. It's a collection of smaller articles I wrote in an older (in German language) blog, which I translated and refreshed with some new content.
-
+

This is the first blog post of my Bash Golf series. This series is about random Bash tips, tricks and weirdnesses I came across. It's a collection of smaller articles I wrote in an older blog (in German), which I translated and refreshed with some new content.

2022-01-01 Bash Golf Part 2
2021-11-29 Bash Golf Part 1 (You are currently reading this)
-

TCP/IP networking

-You probably know the Netcat tool, which is a swiss army knife for TCP/IP networking on the command line. But did you know that the Bash natively supports TCP/IP networking?
-
-Have a look here how that works:
-
+

You probably know the Netcat tool, which is a Swiss Army knife for TCP/IP networking on the command line. But did you know that Bash natively supports TCP/IP networking?

+

Have a look here how that works:

 ❯ cat < /dev/tcp/time.nist.gov/13
 
 59536 21-11-18 08:09:16 00 0 0 153.6 UTC(NIST) *
 
-
-The Bash treats /dev/tcp/HOST/PORT in a special way so that it is actually establishing a TCP connection to HOST:PORT. The example above redirects the TCP output of the time-server to cat and cat is printing it on standard output (stdout).
-
-A more sophisticated example is firing up an HTTP request. Let's create a new read-write (rw) file descriptor (fd) 5, redirect the HTTP request string to it, and then read the response back:
-
+

The Bash treats /dev/tcp/HOST/PORT in a special way so that it is actually establishing a TCP connection to HOST:PORT. The example above redirects the TCP output of the time-server to cat and cat is printing it on standard output (stdout).

+

A more sophisticated example is firing up an HTTP request. Let's create a new read-write (rw) file descriptor (fd) 5, redirect the HTTP request string to it, and then read the response back:

 ❯ exec 5<>/dev/tcp/google.de/80
 ❯ echo -e "GET / HTTP/1.1\nhost: google.de\n\n" >&5
@@ -3471,12 +3167,9 @@ Content-Length: 218
 X-XSS-Protection: 0
 X-Frame-Options: SAMEORIGIN
 
-
-You would assume that this also works with the ZSH, but it doesn't. This is one of the few things which don't work with the ZSH but in the Bash. There might be plugins you could use for ZSH to do something similar, though.
-
+

You would assume that this also works in ZSH, but it doesn't. This is one of the few things which work in Bash but not in ZSH. There might be plugins you could use for ZSH to do something similar, though.

Process substitution

-The idea here is, that you can read the output (stdout) of a command from a file descriptor:
-
+

The idea here is that you can read the output (stdout) of a command from a file descriptor:

 ❯ uptime # Without process substitution
  10:58:03 up 4 days, 22:08,  1 user,  load average: 0.16, 0.34, 0.41
@@ -3495,11 +3188,8 @@ Modify: 2021-11-20 10:59:31.482411961 +0000
 Change: 2021-11-20 10:59:31.482411961 +0000
  Birth: -
 
-
-This example doesn't make any sense practically speaking, but it clearly demonstrates how process substitution works. The standard output pipe of "uptime" is redirected to an anonymous file descriptor. That fd then is opened by the "cat" command as a regular file.
-
-A useful use case is displaying the differences of two sorted files:
-
+

This example doesn't make any sense practically speaking, but it clearly demonstrates how process substitution works. The standard output pipe of "uptime" is redirected to an anonymous file descriptor. That fd is then opened by the "cat" command as a regular file.

+

A useful use case is displaying the differences of two sorted files:

 ❯ echo a > /tmp/file-a.txt
 ❯ echo b >> /tmp/file-a.txt
@@ -3520,15 +3210,11 @@ A useful use case is displaying the differences of two sorted files:
❯ diff -u <(sort /tmp/file-a.txt) <(sort /tmp/file-b.txt)
❯
-
-Another example is displaying the differences of two directories:
-
+

Another example is displaying the differences of two directories:

 ❯ diff -u <(ls ./dir1/ | sort) <(ls ./dir2/ | sort)
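A deterministic variant you can paste directly (the lists are generated on the fly instead of using files or directories):

```shell
#!/usr/bin/env bash
# Each <(...) appears to diff as a file name like /dev/fd/63:
diff -u <(printf 'a\nb\nc\n') <(printf 'a\nc\nd\n') | grep -E '^[+-][a-z]'
# Output: "-b" (only in the first list) and "+d" (only in the second)
```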
 
-
-More (Bash golfing) examples:
-
+

More (Bash golfing) examples:

 ❯ wc -l <(ls /tmp/) /etc/passwd <(env)
      24 /dev/fd/63
@@ -3543,28 +3229,21 @@ More (Bash golfing) examples:
foo bar baz
❯
-
-So far, we only used process substitution for stdout redirection. But it also works for stdin. The following two commands result into the same outcome, but the second one is writing the tar data stream to an anonymous file descriptor which is substituted by the "bzip2" command reading the data stream from stdin and compressing it to its own stdout, which then gets redirected to a file:
-
+

So far, we have only used process substitution for stdout redirection. But it also works for stdin. The following two commands result in the same outcome, but the second one writes the tar data stream to an anonymous file descriptor, which is substituted by the "bzip2" command reading the data stream from stdin and compressing it to its own stdout, which then gets redirected to a file:

 ❯ tar cjf file.tar.bz2 foo
 ❯ tar cjf >(bzip2 -c > file.tar.bz2) foo
 
-
-Just think a while and see whether you understand fully what is happening here.
-
+

Just think a while and see whether you understand fully what is happening here.

Grouping

-Command grouping can be quite useful for combining the output of multiple commands:
-
+

Command grouping can be quite useful for combining the output of multiple commands:

 ❯ { ls /tmp; cat /etc/passwd; env; } | wc -l
 97
 ❯ ( ls /tmp; cat /etc/passwd; env; ) | wc -l
 97
 
-
-But wait, what is the difference between curly braces and normal braces? I assumed that the normal braces create a subprocess whereas the curly ones don't, but I was wrong:
-
+

But wait, what is the difference between curly braces and normal braces? I assumed that the normal braces create a subprocess whereas the curly ones don't, but I was wrong:

 ❯ echo $$
 62676
@@ -3573,9 +3252,7 @@ But wait, what is the difference between curly braces and normal braces? I assum
 ❯ ( echo $$; )
 62676
 
-
-One difference is, that the curly braces require you to end the last statement with a semicolon, whereas with the normal braces you can omit the last semicolon:
-
+

One difference is that the curly braces require you to end the last statement with a semicolon, whereas with the normal braces you can omit the last semicolon:

 ❯ ( env; ls ) | wc -l
 27
@@ -3583,11 +3260,8 @@ One difference is, that the curly braces require you to end the last statement w
 >
 > ^C
 
-
-In case you know more (subtle) differences, please write me an E-Mail and let me know.
-
-Update: A reader sent me an E-Mail and pointed me to the Bash manual page, which explains the difference between () and {} (I should have checked that by myself):
-
+

In case you know more (subtle) differences, please write me an E-Mail and let me know.

+

Update: A reader sent me an E-Mail and pointed me to the Bash manual page, which explains the difference between () and {} (I should have checked that by myself):

 (list) list is executed in a subshell environment (see COMMAND EXECUTION ENVIRONMENT
        below).   Variable  assignments  and builtin commands that affect the shell's
@@ -3602,26 +3276,20 @@ In case you know more (subtle) differences, please write me an E-Mail and let me
        is  permitted  to  be recognized.  Since they do not cause a word break, they
        must be separated from list by whitespace or another shell metacharacter.
 
-
-So I was right that () is executed in a subprocess. But why does $$ not show a different PID? Also here (as pointed out by the reader) is the answer in the manual page:
-
+

So I was right that () is executed in a subprocess. But why does $$ not show a different PID? Also here (as pointed out by the reader) is the answer in the manual page:

 $      Expands to the process ID of the shell.  In a () subshell, it expands to  the
        process ID of the current shell, not the subshell.
 
-
-If we want print the subprocess PID, we can use the BASHPID variable:
-
+

If we want to print the subprocess PID, we can use the BASHPID variable:

 ❯ echo $BASHPID; { echo $BASHPID; }; ( echo $BASHPID; )
 1028465
 1028465
 1028739
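The practical consequence of the subshell is what happens to variables; a short sketch (the variable names are made up):

```shell
#!/usr/bin/env bash
{ x=42; }   # group command: runs in the current shell...
( y=42 )    # subshell: its variable assignments are lost
echo "x=${x:-unset} y=${y:-unset}"
# Output: x=42 y=unset
```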
 
-

Expansions

-Let's start with simple examples:
-
+

Let's start with simple examples:

 ❯ echo {0..5}
 0 1 2 3 4 5
@@ -3633,9 +3301,7 @@ Let's start with simple examples:
4 5
-
-You can also add leading 0 or expand to any number range:
-
+

You can also add leading zeros or expand to any number range:

 ❯ echo {00..05}
 00 01 02 03 04 05
@@ -3644,40 +3310,30 @@ You can also add leading 0 or expand to any number range:
❯ echo {201..205}
201 202 203 204 205
-
-It also works with letters:
-
+

It also works with letters:

 ❯ echo {a..e}
 a b c d e
 
-
-Now it gets interesting. The following takes a list of words and expands it so that all words are quoted:
-
+

Now it gets interesting. The following takes a list of words and expands it so that all words are quoted:

 ❯ echo \"{These,words,are,quoted}\"
 "These" "words" "are" "quoted"
 
-
-Let's also expand to the cross product of two given lists:
-
+

Let's also expand to the cross product of two given lists:

 ❯ echo {one,two}\:{A,B,C}
 one:A one:B one:C two:A two:B two:C
 ❯ echo \"{one,two}\:{A,B,C}\"
 "one:A" "one:B" "one:C" "two:A" "two:B" "two:C"
 
-
-Just because we can:
-
+

Just because we can:

 ❯ echo Linux-{one,two,three}\:{A,B,C}-FreeBSD
 Linux-one:A-FreeBSD Linux-one:B-FreeBSD Linux-one:C-FreeBSD Linux-two:A-FreeBSD Linux-two:B-FreeBSD Linux-two:C-FreeBSD Linux-three:A-FreeBSD Linux-three:B-FreeBSD Linux-three:C-FreeBSD
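Brace expansion also combines nicely with padding and reversed ranges; a few extra hedged one-liners along the same lines:

```shell
#!/usr/bin/env bash
echo {a..c}{1..2}      # cross product: a1 a2 b1 b2 c1 c2
echo file{001..3}.txt  # the zero padding spreads to all items
echo {z..x}            # ranges may also run backwards: z y x
```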
 
-

- aka stdin and stdout placeholder

-Some commands and Bash builtins use "-" as a placeholder for stdin and stdout:
-
+

Some commands and Bash builtins use "-" as a placeholder for stdin and stdout:

 ❯ echo Hello world
 Hello world
@@ -3690,32 +3346,24 @@ Hello world
 ❯ cat - <<< 'Hello world'
 Hello world
 
-
-Let's walk through all three examples from the above snippet:
-
+

Let's walk through the examples from the above snippet:

  • The first example is obvious (the Bash builtin "echo" prints its arguments to stdout).
  • The second pipes "Hello world" via stdout to stdin of the "cat" command. As cat's argument is "-" it reads its data from stdin and not from a regular file named "-". So "-" has a special meaning for cat.
  • The third and fourth examples are interesting as we don't use a pipe as of "|" but a so-called HERE-document and a HERE-string. But the end results are the same.
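The same "-f -" mechanics can be exercised purely locally, without any remote machine (the directory names are made up for this sketch):

```shell
#!/usr/bin/env bash
mkdir -p /tmp/dash-demo/src /tmp/dash-demo/dst
echo hello > /tmp/dash-demo/src/file.txt

# Left tar writes the archive to stdout (-f -), right tar reads it
# from stdin (-f -) and extracts into another directory:
tar -C /tmp/dash-demo/src -cf - . | tar -C /tmp/dash-demo/dst -xf -

cat /tmp/dash-demo/dst/file.txt   # hello
rm -rf /tmp/dash-demo
```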
-
-The "tar" command understands "-" too. The following example tars up some local directory and sends the data to stdout (this is what "-f -" commands it to do). stdout then is piped via an SSH session to a remote tar process (running on buetow.org) and reads the data from stdin and extracts all the data coming from stdin (as we told tar with "-f -") on the remote machine:
-
+

The "tar" command understands "-" too. The following example tars up some local directory and sends the data to stdout (this is what "-f -" tells it to do). That stdout is then piped via an SSH session to a remote tar process (running on buetow.org), which reads the data from stdin and extracts everything coming in (as we told tar with "-f -") on the remote machine:

 ❯ tar -czf - /some/dir | ssh hercules@buetow.org tar -xzvf - 
 
-
-This is yet another example of using "-", but this time using the "file" command:
-
+

This is yet another example of using "-", but this time using the "file" command:

 $ head -n 1 grandmaster.sh
 #!/usr/bin/env bash
 $ file - < <(head -n 1 grandmaster.sh)
 /dev/stdin: a /usr/bin/env bash script, ASCII text executable
 
-
-Some more golfing:
-
+

Some more golfing:

 $ cat -
 hello
@@ -3725,10 +3373,8 @@ $ file -
 #!/usr/bin/perl
 /dev/stdin: Perl script text executable
 
-

Alternative argument passing

-This is a quite unusual way of passing arguments to a Bash script:
-
+

This is a quite unusual way of passing arguments to a Bash script:

 ❯ cat foo.sh
 #!/usr/bin/env bash
@@ -3736,9 +3382,7 @@ declare -r USER=${USER:?Missing the username}
 declare -r PASS=${PASS:?Missing the secret password for $USER}
 echo $USER:$PASS
 
-
-So what we are doing here is to pass the arguments via environment variables to the script. The script will abort with an error when there's an undefined argument.
-
+

So what we are doing here is passing the arguments via environment variables to the script. The script will abort with an error when one of them is undefined.

 ❯ chmod +x foo.sh
 ❯ ./foo.sh
@@ -3750,26 +3394,19 @@ So what we are doing here is to pass the arguments via environment variables to
 ❯ USER=paul PASS=secret ./foo.sh
 paul:secret
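The abort behaviour is easy to poke at in a subshell (DEMO_USER is a made-up name, chosen so we don't collide with the real USER variable):

```shell
#!/usr/bin/env bash
unset DEMO_USER
# Unset variable: the :? expansion aborts the subshell with the message:
bash -c 'echo "${DEMO_USER:?Missing the username}"' 2>/dev/null \
    || echo "aborted as expected"

# Set via the environment: the script proceeds normally:
DEMO_USER=paul bash -c 'echo "${DEMO_USER:?Missing the username}"'
```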
 
-
-You have probably noticed this *strange* syntax:
-
+

You have probably noticed this *strange* syntax:

 ❯ VARIABLE1=value1 VARIABLE2=value2 ./script.sh
 
-
-That's just another way to pass environment variables to a script. You can write it as well as like this:
-
+

That's just another way to pass environment variables to a script. You can also write it like this:

 ❯ export VARIABLE1=value1
 ❯ export VARIABLE2=value2
 ❯ ./script.sh
 
-
-But the downside of it is that the variables will also be defined in your current shell environment and not just in the scripts sub-process.
-
+

But the downside is that the variables will also be defined in your current shell environment and not just in the script's sub-process.

: aka the null command

-First, let's use the "help" Bash built-in to see what it says about the null command:
-
+

First, let's use the "help" Bash built-in to see what it says about the null command:

 ❯ help :
 :: :
@@ -3780,19 +3417,14 @@ First, let's use the "help" Bash built-in to see what it says about the null com
     Exit Status:
     Always succeeds.
 
-
-PS: IMHO, people should use the Bash help more often. It is a very useful Bash reference. Too many fallbacks to a Google search and then land on Stack Overflow. Sadly, there's no help built-in for the ZSH shell though (so even when I am using the ZSH I make use of the Bash help as most of the built-ins are compatible).
-
-OK, back to the null command. What happens when you try to run it? As you can see, absolutely nothing. And its exit status is 0 (success):
-
+

PS: IMHO, people should use the Bash help more often. It is a very useful Bash reference. Too many people fall back to a Google search and end up on Stack Overflow. Sadly, there's no help built-in for the ZSH shell (so even when I am using the ZSH, I make use of the Bash help, as most of the built-ins are compatible).

+

OK, back to the null command. What happens when you try to run it? As you can see, absolutely nothing. And its exit status is 0 (success):

 ❯ :
 ❯ echo $?
 0
 
-
-Why would that be useful? You can use it as a placeholder in an endless while-loop:
-
+

Why would that be useful? You can use it as a placeholder in an endless while-loop:

 ❯ while : ; do date; sleep 1; done
 Sun 21 Nov 12:08:31 GMT 2021
@@ -3801,9 +3433,7 @@ Sun 21 Nov 12:08:33 GMT 2021
 ^C
 ❯
 
-
-You can also use it as a placeholder for a function body not yet fully implemented, as an empty function ill result in a syntax error:
-
+

You can also use it as a placeholder for a function body not yet fully implemented, as an empty function will result in a syntax error:

 ❯ foo () {  }
 -bash: syntax error near unexpected token `}'
@@ -3811,15 +3441,11 @@ You can also use it as a placeholder for a function body not yet fully implement
 ❯ foo
 ❯
 
-
-Or use it as a placeholder for not yet implemented conditional branches:
-
+

Or use it as a placeholder for not yet implemented conditional branches:

 ❯ if foo; then :; else echo bar; fi
 
-
-Or (not recommended) as a fancy way to comment your Bash code:
-
+

Or (not recommended) as a fancy way to comment your Bash code:

 ❯ : I am a comment and have no other effect
 ❯ : I am a comment and result in a syntax error ()
@@ -3827,9 +3453,7 @@ Or (not recommended) as a fancy way to comment your Bash code:
❯ : "I am a comment and don't result in a syntax error ()"
❯
-
-As you can see in the previous example, the Bash still tries to interpret some syntax of all text following after ":". This can be exploited (also not recommended) like this:
-
+

As you can see in the previous example, the Bash still tries to interpret some of the syntax of the text following ":". This can be exploited (also not recommended) like this:

 ❯ declare i=0
 ❯ $[ i = i + 1 ]
@@ -3840,9 +3464,7 @@ bash: 1: command not found...
 ❯ echo $i
 4
 
-
-For these kinds of expressions it's always better to use "let" though. And you should also use $((...expression...)) instead of the old (deprecated) way $[ ...expression... ] like this example demonstrates:
-
+

For these kinds of expressions it's always better to use "let" though. And you should also use $((...expression...)) instead of the old (deprecated) way $[ ...expression... ], as this example demonstrates:

 ❯ declare j=0
 ❯ let j=$((j + 1))
@@ -3852,10 +3474,8 @@ For these kinds of expressions it's always better to use "let" though. And you s
 ❯ echo $j
 4
 
-

(No) floating point support

-I have to give a plus-point to the ZSH here. As the ZSH supports floating point calculation, whereas the Bash doesn't:
-
+

I have to give a plus-point to the ZSH here, as it supports floating point calculation, whereas the Bash doesn't:

 ❯ bash -c 'echo $(( 1/10 ))'
 0
@@ -3867,27 +3487,19 @@ bash: line 1: 1/10.0 : syntax error: invalid arithmetic operator (error token is
 0.10000000000000001
 ❯
 
-
-It would be nice to have native floating point support for the Bash too, but you don't want to use the shell for complicated calculations anyway. So it's fine that Bash doesn't have that, I guess.
-
-In the Bash you will have to fall back to an external command like "bc" (the arbitrary precision calculator language):
-
+

It would be nice to have native floating point support for the Bash too, but you don't want to use the shell for complicated calculations anyway. So it's fine that Bash doesn't have that, I guess.

+

In the Bash you will have to fall back to an external command like "bc" (the arbitrary precision calculator language):

 ❯ bc <<< 'scale=2; 1/10'
 .10
 
-
-See you later for the next post of this series.
-
-Other related posts are:
-
+

See you later for the next post of this series.

+

Other related posts are:

2022-01-01 Bash Golf Part 2
2021-11-29 Bash Golf Part 1 (You are currently reading this)
2021-06-05 Gemtexter - One Bash script to rule it all
2021-05-16 Personal Bash coding style guide
-
-E-Mail your comments to hi@paul.cyou :-)
-
+

E-Mail your comments to hi@paul.cyou :-)

Back to the main site
@@ -4223,8 +3835,7 @@ Hello World

Gemtexter - One Bash script to rule it all

-Published at 2021-06-05T19:03:32+01:00
-
+

Published at 2021-06-05T19:03:32+01:00

                                                                o .,<>., o
                                                                |\/\/\/\/|
@@ -4265,25 +3876,16 @@ Hello World
  \___.>`''-.||:.__,'     SSt |_______`>              <_____:::.         . . \  _/
                                                            `+a:f:......jrei'''
 
-
-You might have read my previous blog posts about entering the Geminispace, where I pointed out the benefits of having and maintaining an internet presence there. This whole site (the blog and all other pages) is composed in the Gemtext markup language.
-
-This comes with the benefit that I can write content in my favourite text editor (Vim).
-
+

You might have read my previous blog posts about entering the Geminispace, where I pointed out the benefits of having and maintaining an internet presence there. This whole site (the blog and all other pages) is composed in the Gemtext markup language.

+

This comes with the benefit that I can write content in my favourite text editor (Vim).

Motivation

-Another benefit of using Gemini is that the Gemtext markup language is easy to parse. As my site is dual-hosted (Gemini+HTTP), I could, in theory, just write a shell script to deal with the conversion from Gemtext to HTML; there is no need for a full-featured programming language here. I have done a lot of Bash in the past, but I am also often revisiting old tools and techniques for refreshing and keeping the knowledge up to date here.
-
+

Another benefit of using Gemini is that the Gemtext markup language is easy to parse. As my site is dual-hosted (Gemini+HTTP), I could, in theory, just write a shell script to deal with the conversion from Gemtext to HTML; there is no need for a full-featured programming language here. I have done a lot of Bash in the past, but I also often revisit old tools and techniques to refresh and keep my knowledge up to date.

Motivational comic strip
-
-I have exactly done that - I wrote a Bash script, named Gemtexter, for that:
-
+

I have done exactly that - I wrote a Bash script, named Gemtexter, for it:

https://codeberg.org/snonux/gemtexter
-
-In short, Gemtexter is a static site generator and blogging engine that uses Gemtext as its input format.
-
+

In short, Gemtexter is a static site generator and blogging engine that uses Gemtext as its input format.

Output formats

-Gemtexter takes the Gemntext Markup files as the input and generates the following outputs from it (you find examples for each of these output formats on the Gemtexter GitHub page):
-
+

Gemtexter takes the Gemtext markup files as input and generates the following outputs from them (you can find examples for each of these output formats on the Gemtexter GitHub page):

  • HTML files for my website
  • Markdown files for a GitHub page
  • @@ -4291,15 +3893,11 @@ Gemtexter takes the Gemntext Markup files as the input and generates the followi
  • A Gemfeed for my blog posts (a particular feed format commonly used in Geminispace. The Gemfeed can be used as an alternative to the Atom feed).
  • An HTML Atom feed of my blog posts
-
-I could have done all of that with a more robust language than Bash (such as Perl, Ruby, Go...), but I didn't. The purpose of this exercise was to challenge what I can do with a "simple" Bash script and learn new things.
-
+

I could have done all of that with a more robust language than Bash (such as Perl, Ruby, Go...), but I didn't. The purpose of this exercise was to challenge what I can do with a "simple" Bash script and learn new things.

Taking it as far as I should, but no farther

-The Bash is suitable very well for small scripts and ad-hoc automation on the command line. But it is for sure not a robust programming language. Writing this blog post, Gemtexter is nearing 1000 lines of code, which is actually a pretty large Bash script.
-
+

Bash is very well suited for small scripts and ad-hoc automation on the command line. But it is certainly not a robust programming language. As of writing this blog post, Gemtexter is nearing 1000 lines of code, which is actually a pretty large Bash script.

Modularization

-I modularized the code so that each core functionality has its own file in ./lib. All the modules are included from the main Gemtexter script. For example, there is one module for HTML generation, one for Markdown generation, and so on.
-
+

I modularized the code so that each core functionality has its own file in ./lib. All the modules are included from the main Gemtexter script. For example, there is one module for HTML generation, one for Markdown generation, and so on.

 paul in uranus in gemtexter on 🌱 main
 ❯ wc -l gemtexter lib/*
@@ -4314,31 +3912,19 @@ paul in uranus in gemtexter on 🌱 main
      63 lib/md.source.sh
      834 total
 
-
-This way, the script could grow far beyond 1000 lines of code and still be maintainable. With more features, execution speed may slowly become a problem, though. I already notice that Gemtexter doesn't produce results instantly but requires few seconds of runtime already. That's not a problem yet, though.
-
+

This way, the script could grow far beyond 1000 lines of code and still be maintainable. With more features, execution speed may slowly become a problem. I already notice that Gemtexter doesn't produce results instantly but requires a few seconds of runtime. That's not a problem yet, though.

Bash best practises and ShellCheck

-While working on Gemtexter, I also had a look at the Google Shell Style Guide and wrote a blog post on that:
-
+

While working on Gemtexter, I also had a look at the Google Shell Style Guide and wrote a blog post on that:

Personal bash coding style guide
-
-I followed all these best practices, and in my opinion, the result is a pretty maintainable Bash script (given that you are fluent with all the sed and grep commands I used).
-
-ShellCheck, a shell script analysis tool written in Haskell, is run on Gemtexter ensuring that all code is acceptable. I am pretty impressed with what ShellCheck found.
-
-It, for example, detected "some_command | while read var; do ...; done" loops and hinted that these create a new subprocess for the while part. The result is that all variable modifications taking place in the while-subprocess won't reflect the primary Bash process. ShellSheck then recommended rewriting the loop so that no subprocess is spawned as "while read -r var; do ...; done < <(some_command)". ShellCheck also pointed out to add a "-r" to "read"; otherwise, there could be an issue with backspaces in the loop data.
-
-Furthermore, ShellCheck recommended many more improvements. Declaration of unused variables and missing variable and string quotations were the most common ones. ShellSheck immensely helped to improve the robustness of the script.
-
+

I followed all these best practices, and in my opinion, the result is a pretty maintainable Bash script (given that you are fluent with all the sed and grep commands I used).

+

ShellCheck, a shell script analysis tool written in Haskell, is run on Gemtexter, ensuring that all code is acceptable. I am pretty impressed with what ShellCheck found.

+

It, for example, detected "some_command | while read var; do ...; done" loops and hinted that these create a new subprocess for the while part. The result is that all variable modifications taking place in the while-subprocess won't be reflected in the primary Bash process. ShellCheck then recommended rewriting the loop as "while read -r var; do ...; done < <(some_command)" so that no subprocess is spawned. ShellCheck also pointed out that a "-r" should be added to "read"; otherwise, there could be an issue with backslashes in the input data.
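
The subshell pitfall described above can be demonstrated with a minimal sketch (the counter variable and input data are made up for illustration):

```shell
#!/usr/bin/env bash

# The piped variant: the while-loop runs in a subprocess,
# so the increment is lost in the parent shell.
count=0
printf 'a\nb\nc\n' | while read -r line; do
    count=$((count + 1))
done
echo "piped: $count"          # prints "piped: 0"

# The rewrite ShellCheck suggests: process substitution keeps
# the loop in the current shell, so the counter survives.
count=0
while read -r line; do
    count=$((count + 1))
done < <(printf 'a\nb\nc\n')
echo "substituted: $count"    # prints "substituted: 3"
```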

+

Furthermore, ShellCheck recommended many more improvements. Declaration of unused variables and missing variable and string quotations were the most common ones. ShellCheck immensely helped to improve the robustness of the script.

https://shellcheck.net
-

Unit testing

-There is a basic unit test module in ./lib/assert.source.sh, which is used for unit testing. I found this to be very beneficial for cross-platform development. For example, I noticed that some unit tests failed on macOS while everything still worked fine on my Fedora Linux laptop.
-
-After digging a bit, I noticed that I had to install the GNU versions of the sed and grep commands on macOS and a newer version of the Bash to make all unit tests pass and Gemtexter work.
-
-It has been proven quite helpful to have unit tests in place for the HTML part already when working on the Markdown generator part. To test the Markdown part, I copied the HTML unit tests and changed the expected outcome in the assertions. This way, I could implement the Markdown generator in a test-driven way (writing the test first and afterwards the implementation).
-
+

There is a basic unit test module in ./lib/assert.source.sh, which is used for unit testing. I found this to be very beneficial for cross-platform development. For example, I noticed that some unit tests failed on macOS while everything still worked fine on my Fedora Linux laptop.

+

After digging a bit, I noticed that I had to install the GNU versions of the sed and grep commands on macOS and a newer version of the Bash to make all unit tests pass and Gemtexter work.

+

Having unit tests in place for the HTML part proved quite helpful when working on the Markdown generator part. To test the Markdown part, I copied the HTML unit tests and changed the expected outcome in the assertions. This way, I could implement the Markdown generator in a test-driven way (writing the test first and the implementation afterwards).

HTML unit test example

 gemtext='=> http://example.org Description of the link'
@@ -4346,41 +3932,31 @@ assert::equals "$(generate::make_link html "$gemtext")" \
     '<a class="textlink" href="http://example.org">Description of the link</a><br />'
 
 
-

Markdown unit test example

 gemtext='=> http://example.org Description of the link'
 assert::equals "$(generate::make_link md "$gemtext")" \
     '[Description of the link](http://example.org)  '
 
-

Handcrafted HTML styles

-I had a look at some ready off the shelf CSS styles, but they all seemed too bloated. There is a whole industry selling CSS styles on the interweb. I preferred an effortless and minimalist style for the HTML site. So I handcrafted the Cascading Style Sheets manually with love and included them in the HTML header template.
-
-For now, I have to re-generate all HTML files whenever the CSS changes. That should not be an issue now, but I might move the CSS into a separate file one day.
-
-It's worth mentioning that all generated HTML files and Atom feeds pass the W3C validation tests.
-
+

I had a look at some ready-made, off-the-shelf CSS styles, but they all seemed too bloated. There is a whole industry selling CSS styles on the interweb. I preferred an effortless and minimalist style for the HTML site. So I handcrafted the Cascading Style Sheets manually with love and included them in the HTML header template.

+

For now, I have to re-generate all HTML files whenever the CSS changes. That should not be an issue now, but I might move the CSS into a separate file one day.

+

It's worth mentioning that all generated HTML files and Atom feeds pass the W3C validation tests.

+

Configurability

-In case someone else than me wants to use Gemtexter for his own site, it is pretty much configurable. It is possible to specify your own configuration file and your own HTML templates. Have a look at the GitHub page for examples.
-
+

In case someone other than me wants to use Gemtexter for their own site, it is quite configurable: it is possible to specify your own configuration file and your own HTML templates. Have a look at the GitHub page for examples.

Future features

-I could think of the following features added to a future version of Gemtexter:
-
+

I could think of the following features added to a future version of Gemtexter:

  • Templating of Gemtext files so that the .html files are generated from .gmi.tpl files. The template engine could do such things as an automatic table of contents and sitemap generation. It could also include the output of inlined shell code, e.g. a fortune quote.
  • Add support for more output formats, such as Groff, PDF, plain text, Gopher, etc.
  • External CSS file for HTML.
  • Improve speed by introducing parallelism and/or concurrency and/or better caching.
-

Conclusion

-It was quite a lot of fun writing Gemtexter. It's a relatively small project, but given that I worked on that in my spare time once in a while, it kept me busy for several weeks.
-
-I finally revamped my personal internet site and started to blog again. I wanted the result to be exactly how it is now: A slightly retro-inspired internet site built for fun with unconventional tools.
-
-Other related posts are:
-
+

It was quite a lot of fun writing Gemtexter. It's a relatively small project, but given that I worked on that in my spare time once in a while, it kept me busy for several weeks.

+

I finally revamped my personal internet site and started to blog again. I wanted the result to be exactly how it is now: A slightly retro-inspired internet site built for fun with unconventional tools.

+

Other related posts are:

2023-03-25 Gemtexter 2.0.0 - Let's Gemtext again^2
2022-08-27 Gemtexter 1.1.0 - Let's Gemtext again
2022-01-01 Bash Golf Part 2
@@ -4388,9 +3964,7 @@ Other related posts are:
2021-06-05 Gemtexter - One Bash script to rule it all (You are currently reading this)
2021-05-16 Personal Bash coding style guide
2021-04-24 Welcome to the Geminispace
-
-E-Mail your comments to hi@paul.cyou :-)
-
+

E-Mail your comments to hi@paul.cyou :-)

Back to the main site
@@ -4408,8 +3982,7 @@ E-Mail your comments to hi@paul.cyou :-)

Personal Bash coding style guide

-Published at 2021-05-16T14:51:57+01:00
-
+

Published at 2021-05-16T14:51:57+01:00

    .---------------------------.
   /,--..---..---..---..---..--. `.
@@ -4422,39 +3995,26 @@ E-Mail your comments to hi@paul.cyou :-)
// \\ // \\ |===|| hjw "\__/"---------------"\__/"-+---+'
-
-Lately, I have been polishing and writing a lot of Bash code. Not that I never wrote a lot of Bash, but now as I also looked through the Google Shell Style Guide, I thought it is time also to write my thoughts on that. I agree with that guide in most, but not in all points.
-
+

Lately, I have been polishing and writing a lot of Bash code. Not that I never wrote a lot of Bash before, but now that I have also looked through the Google Shell Style Guide, I thought it was time to write down my own thoughts on it. I agree with that guide on most, but not all, points.

Google Shell Style Guide
-

My modifications

-These are my modifications to the Google Guide.
-
+

These are my modifications to the Google Guide.

Shebang

-Google recommends using always...
-
+

Google recommends using always...

 #!/bin/bash 
 
-
-... as the shebang line, but that does not work on all Unix and Unix-like operating systems (e.g., the *BSDs don't have Bash installed to /bin/bash). Better is:
-
+

... as the shebang line, but that does not work on all Unix and Unix-like operating systems (e.g., the *BSDs don't have Bash installed to /bin/bash). Better is:

 #!/usr/bin/env bash
 
-

Two space soft-tabs indentation

-I know there have been many tab- and soft-tab wars on this planet. Google recommends using two space soft-tabs for Bash scripts.
-
-I don't care if I use two or four space indentations. I agree, however, that we should not use tabs. I tend to use four-space soft-tabs as that's how I currently configured Vim for any programming language. What matters most, though, is consistency within the same script/project.
-
-Google also recommends limiting the line length to 80 characters. For some people, that seems to be an old habit from the '80s, where all computer terminals couldn't display longer lines. But I think that the 80 character mark is still a good practice, at least for shell scripts. For example, I am often writing code on a Microsoft Go Tablet PC (running Linux, of course), and it comes in convenient if the lines are not too long due to the relatively small display on the device.
-
-I hit the 80 character line length quicker with the four spaces than with two spaces, but that makes me refactor the Bash code more aggressively, which is a good thing.
-
+

I know there have been many tab- and soft-tab wars on this planet. Google recommends using two space soft-tabs for Bash scripts.

+

I don't care if I use two or four space indentations. I agree, however, that we should not use tabs. I tend to use four-space soft-tabs as that's how I currently configured Vim for any programming language. What matters most, though, is consistency within the same script/project.

+

Google also recommends limiting the line length to 80 characters. For some people, that seems to be an old habit from the '80s, when computer terminals couldn't display longer lines. But I think that the 80 character mark is still a good practice, at least for shell scripts. For example, I am often writing code on a Microsoft Go Tablet PC (running Linux, of course), and it comes in handy if the lines are not too long, due to the relatively small display of the device.

+

I hit the 80 character line length quicker with the four spaces than with two spaces, but that makes me refactor the Bash code more aggressively, which is a good thing.

Breaking long pipes

-Google recommends breaking up long pipes like this:
-
+

Google recommends breaking up long pipes like this:

 # All fits on one line
 command1 | command2
@@ -4465,9 +4025,7 @@ command1 \
   | command3 \
   | command4
 
-
-I think there is a better way like the following, which is less noisy. The pipe | already indicates the Bash that another command is expected, thus making the explicit line breaks with \ obsolete:
-
+

I think there is a better way, like the following, which is less noisy. The pipe | already indicates to Bash that another command is expected, making the explicit line breaks with \ obsolete:

 # Long commands
 command1 |
@@ -4475,10 +4033,8 @@ command1 |
     command3 |
     command4
 
-

Quoting your variables

-Google recommends always quote your variables. Generally, it would be best if you did that only for variables where you are unsure about the content/values of the variables (e.g., content is from an external input source and may contain whitespace or other special characters). In my opinion, the code will become quite noisy when you always quote your variables like this:
-
+

Google recommends always quoting your variables. Generally, it would be best if you did that only for variables where you are unsure about their content/values (e.g., the content comes from an external input source and may contain whitespace or other special characters). In my opinion, the code becomes quite noisy when you always quote your variables like this:

 greet () {
     local -r greeting="${1}"
@@ -4486,9 +4042,7 @@ greet () {
     echo "${greeting} ${name}!"
 }
 
-
-In this particular example, I agree that you should quote them as you don't know the input (are there, for example, whitespace characters?). But if you are sure that you are only using simple bare words, then I think that the code looks much cleaner when you do this instead:
-
+

In this particular example, I agree that you should quote them as you don't know the input (are there, for example, whitespace characters?). But if you are sure that you are only using simple bare words, then I think that the code looks much cleaner when you do this instead:

 say_hello_to_paul () {
     local -r greeting=Hello
@@ -4496,20 +4050,15 @@ say_hello_to_paul () {
     echo "$greeting $name!"
 }
 
-
-You see, I also omitted the curly braces { } around the variables. I only use the curly braces around variables when it makes the code either easier/clearer to read or if it is necessary to use them:
-
+

You see, I also omitted the curly braces { } around the variables. I only use the curly braces around variables when it makes the code either easier/clearer to read or if it is necessary to use them:

 declare FOO=bar
 # Curly braces around FOO are necessary
 echo "foo${FOO}baz"
 
-
-A few more words on always quoting the variables: For the sake of consistency (and for making ShellCheck happy), I am not against quoting everything I encounter. I also think that the larger the Bash script becomes, the more critical it becomes always to quote variables. That's because it will be more likely that you might not remember that some of the functions don't work on values with spaces in them, for example. It's just that I won't quote everything in every small script I write.
-
+

A few more words on always quoting the variables: For the sake of consistency (and for making ShellCheck happy), I am not against quoting everything I encounter. I also think that the larger the Bash script becomes, the more critical it becomes always to quote variables. That's because it will be more likely that you might not remember that some of the functions don't work on values with spaces in them, for example. It's just that I won't quote everything in every small script I write.

Prefer built-in commands over external commands

-Google recommends using the built-in commands over available external commands where possible:
-
+

Google recommends using the built-in commands over available external commands where possible:

 # Prefer this:
 addition=$(( X + Y ))
@@ -4519,19 +4068,13 @@ substitution="${string/#foo/bar}"
 addition="$(expr "${X}" + "${Y}")"
 substitution="$(echo "${string}" | sed -e 's/^foo/bar/')"
 
-
-I can't entirely agree here. The external commands (especially sed) are much more sophisticated and powerful than the built-in Bash versions. Sed can do much more than the Bash can ever do by itself when it comes to text manipulation (the name "sed" stands for streaming editor, after all).
-
-I prefer to do light text processing with the Bash built-ins and more complicated text processing with external programs such as sed, grep, awk, cut, and tr. However, there is also medium-light text processing where I would want to use external programs. That is so because I remember using them better than the Bash built-ins. The Bash can get relatively obscure here (even Perl will be more readable then - Side note: I love Perl).
-
-Also, you would like to use an external command for floating-point calculation (e.g., bc) instead of using the Bash built-ins (worth noticing that ZSH supports built-in floating-points).
-
-I even didn't get started with what you can do with awk (especially GNU Awk), a fully-fledged programming language. Tiny Awk snippets tend to be used quite often in Shell scripts without honouring the real power of Awk. But if you did everything in Perl or Awk or another scripting language, then it wouldn't be a Bash script anymore, wouldn't it? ;-)
-
+

I can't entirely agree here. The external commands (especially sed) are much more sophisticated and powerful than the built-in Bash versions. Sed can do much more than the Bash can ever do by itself when it comes to text manipulation (the name "sed" stands for streaming editor, after all).

+

I prefer to do light text processing with the Bash built-ins and more complicated text processing with external programs such as sed, grep, awk, cut, and tr. However, there is also medium-light text processing where I would still want to use external programs. That is because I remember how to use them better than the Bash built-ins. The Bash can get relatively obscure here (even Perl will be more readable - side note: I love Perl).

+

Also, you would like to use an external command for floating-point calculation (e.g., bc) instead of using the Bash built-ins (worth noticing that ZSH supports built-in floating-points).

+

I haven't even gotten started on what you can do with awk (especially GNU Awk), a fully-fledged programming language. Tiny Awk snippets are used quite often in shell scripts without honouring the real power of Awk. But if you did everything in Perl or Awk or another scripting language, then it wouldn't be a Bash script anymore, would it? ;-)

My additions

Use of 'yes' and 'no'

-Bash does not support a boolean type. I tend just to use the strings 'yes' and 'no' here. I used 0 for false and 1 for true for some time, but I think that the yes/no strings are easier to read. Yes, the Bash script would need to perform string comparisons on every check, but if performance is crucial to you, you wouldn't want to use a Bash script anyway, correct?
-
+

Bash does not support a boolean type. I tend to just use the strings 'yes' and 'no' here. I used 0 for false and 1 for true for some time, but I think the yes/no strings are easier to read. Yes, the Bash script needs to perform string comparisons on every check, but if performance is crucial to you, you wouldn't want to use a Bash script anyway, correct?

 declare -r SUGAR_FREE=yes
 declare -r I_NEED_THE_BUZZ=no
@@ -4548,10 +4091,8 @@ buy_soda () {
 
 buy_soda $I_NEED_THE_BUZZ
 
-

Non-evil alternative to variable assignments via eval

-Google is in the opinion that eval should be avoided. I think so too. They list these examples in their guide:
-
+

Google is of the opinion that eval should be avoided. I think so too. They list these examples in their guide:

 # What does this set?
 # Did it succeed? In part or whole?
@@ -4561,9 +4102,7 @@ eval $(set_my_variables)
 variable="$(eval some_function)"
 
 
-
-However, if I want to read variables from another file, I don't have to use eval here. I only have to source the file:
-
+

However, if I want to read variables from another file, I don't have to use eval here. I only have to source the file:

 % cat vars.source.sh
 declare foo=bar
@@ -4573,9 +4112,7 @@ declare bay=foo
 % bash -c 'source vars.source.sh; echo $foo $bar $baz'
 bar baz foo
 
-
-And suppose I want to assign variables dynamically. In that case, I could just run an external script and source its output (This is how you could do metaprogramming in Bash without the use of eval - write code which produces code for immediate execution):
-
+

And suppose I want to assign variables dynamically. In that case, I could just run an external script and source its output (This is how you could do metaprogramming in Bash without the use of eval - write code which produces code for immediate execution):

 % cat vars.sh
 #!/usr/bin/env bash
@@ -4587,12 +4124,9 @@ END
 % bash -c 'source <(./vars.sh); echo "Hello $user, it is $date"'
 Hello paul, it is Sat 15 May 19:21:12 BST 2021
 
-
-The downside is that ShellCheck won't be able to follow the dynamic sourcing anymore.
-
+

The downside is that ShellCheck won't be able to follow the dynamic sourcing anymore.

Prefer pipes over arrays for list processing

-When I do list processing in Bash, I prefer to use pipes. You can chain them through Bash functions as well, which is pretty neat. Usually, my list processing scripts are of a structure like this:
-
+

When I do list processing in Bash, I prefer to use pipes. You can chain them through Bash functions as well, which is pretty neat. Usually, my list processing scripts are of a structure like this:

 filter_lines () {
     echo 'Start filtering lines in a fancy way!' >&2
@@ -4628,14 +4162,10 @@ main () {
 
 main
 
-
-The stdout is always passed as a pipe to the next following stage. The stderr is used for info logging.
-
+

Stdout is always passed as a pipe to the following stage, while stderr is used for info logging.
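A minimal, runnable sketch of this structure, with made-up stage names:

```shell
#!/usr/bin/env bash
set -euo pipefail

filter_lines () {
    echo 'Start filtering lines!' >&2  # info logging goes to stderr
    grep -v '^#'                       # stdout is the data pipe
}

uppercase_lines () {
    echo 'Start uppercasing lines!' >&2
    tr '[:lower:]' '[:upper:]'
}

printf '# a comment\nhello\nworld\n' | filter_lines | uppercase_lines
```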

Assign-then-shift

-I often refactor existing Bash code. That leads me to add and removing function arguments quite often. It's pretty repetitive work changing the $1, $2.... function argument numbers every time you change the order or add/remove possible arguments.
-
-The solution is to use of the "assign-then-shift"-method, which goes like this: "local -r var1=$1; shift; local -r var2=$1; shift". The idea is that you only use "$1" to assign function arguments to named (better readable) local function variables. You will never have to bother about "$2" or above. That is very useful when you constantly refactor your code and remove or add function arguments. It's something that I picked up from a colleague (a pure Bash wizard) some time ago:
-
+

I often refactor existing Bash code, which leads me to add and remove function arguments quite often. It's pretty repetitive work to renumber the $1, $2, ... function arguments every time you change the order or add/remove arguments.

+

The solution is to use the "assign-then-shift" method, which goes like this: "local -r var1=$1; shift; local -r var2=$1; shift". The idea is that you only ever use "$1" to assign function arguments to named (more readable) local function variables. You never have to bother with "$2" or above. That is very useful when you constantly refactor your code and remove or add function arguments. It's something that I picked up from a colleague (a pure Bash wizard) some time ago:

 some_function () {
     local -r param_foo="$1"; shift
@@ -4644,9 +4174,7 @@ some_function () {
     ...
 }
 
-
-Want to add a param_baz? Just do this:
-
+

Want to add a param_baz? Just do this:

 some_function () {
     local -r param_foo="$1"; shift
@@ -4656,9 +4184,7 @@ some_function () {
     ...
 }
 
-
-Want to remove param_foo? Nothing easier than that:
-
+

Want to remove param_foo? Nothing easier than that:

 some_function () {
     local -r param_bar="$1"; shift
@@ -4667,20 +4193,15 @@ some_function () {
     ...
 }
 
-
-As you can see, I didn't need to change any other assignments within the function. Of course, you would also need to change the function argument lists at every occasion where the function is invoked - you would do that within the same refactoring session.
-
+

As you can see, I didn't need to change any other assignments within the function. Of course, you would also need to update the function's argument lists everywhere it is invoked - you would do that within the same refactoring session.

Paranoid mode

-I call this the paranoid mode. The Bash will stop executing when a command exits with a status not equal to 0:
-
+

I call this the paranoid mode. Bash stops executing as soon as a command exits with a non-zero status:

 set -e
 grep -q foo <<< bar
 echo Jo
 
-
-Here 'Jo' will never be printed out as the grep didn't find any match. It's unrealistic for most scripts to run in paranoid mode purely, so there must be a way to add exceptions. Critical Bash scripts of mine tend to look like this:
-
+

Here, 'Jo' will never be printed, as the grep didn't find any match. It's unrealistic for most scripts to run purely in paranoid mode, so there must be a way to add exceptions. Critical Bash scripts of mine tend to look like this:

 #!/usr/bin/env bash
 
@@ -4703,50 +4224,38 @@ some_function () {
     ...
 }
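Two common ways to carve out such exceptions under set -e are appending '|| true' when the exit status doesn't matter, and capturing the status explicitly when it does. A minimal sketch (the variable names are mine):

```shell
#!/usr/bin/env bash
set -e

# Exception 1: we don't care whether the match exists; '|| true'
# keeps 'set -e' from aborting the script.
grep -q foo <<< bar || true

# Exception 2: we want the actual exit status without aborting.
status=0
grep -q foo <<< bar || status=$?
echo "grep exit status: $status"

echo Jo  # unlike the earlier example, this line is now reached
```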
 
-

Learned

-There are also a couple of things I've learned from Google's guide.
-
+

There are also a couple of things I've learned from Google's guide.

Unintended lexicographical comparison.

-The following looks like a valid Bash code:
-
+

The following looks like valid Bash code:

 if [[ "${my_var}" > 3 ]]; then
     # True for 4, false for 22.
     do_something
 fi
 
-
-... but it is probably an unintended lexicographical comparison. A correct way would be:
-
+

... but it is probably an unintended lexicographical comparison. A correct way would be:

 if (( my_var > 3 )); then
     do_something
 fi
 
-
-or
-
+

or

 if [[ "${my_var}" -gt 3 ]]; then
     do_something
 fi
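To see the difference, here is a small sketch comparing both operators on the same value:

```shell
#!/usr/bin/env bash

my_var=22

# String comparison: "22" sorts before "3" because '2' < '3'.
if [[ "${my_var}" > 3 ]]; then
    string_result=greater
else
    string_result=not-greater
fi

# Arithmetic comparison: 22 really is greater than 3.
if (( my_var > 3 )); then
    numeric_result=greater
else
    numeric_result=not-greater
fi

echo "string: $string_result, numeric: $numeric_result"
```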
 
-

PIPESTATUS

-I have never used the PIPESTATUS variable before. I knew that it's there, but I never bothered to understand how it works until now thoroughly.
-
-The PIPESTATUS variable in Bash allows checking of the return code from all parts of a pipe. If it's only necessary to check the success or failure of the whole pipe, then the following is acceptable:
-
+

I had never used the PIPESTATUS variable before. I knew it was there, but until now I never bothered to thoroughly understand how it works.

+

The PIPESTATUS variable in Bash allows checking of the return code from all parts of a pipe. If it's only necessary to check the success or failure of the whole pipe, then the following is acceptable:

 tar -cf - ./* | ( cd "${dir}" && tar -xf - )
 if (( PIPESTATUS[0] != 0 || PIPESTATUS[1] != 0 )); then
     echo "Unable to tar files to ${dir}" >&2
 fi
 
-
-However, as PIPESTATUS will be overwritten as soon as you do any other command, if you need to act differently on errors based on where it happened in the pipe, you'll need to assign PIPESTATUS to another variable immediately after running the command (don't forget that [ is a command and will wipe out PIPESTATUS).
-
+

However, PIPESTATUS is overwritten as soon as you run any other command. If you need to act differently on errors depending on where in the pipe they occurred, you'll need to assign PIPESTATUS to another variable immediately after running the pipeline (don't forget that [ is a command and will wipe out PIPESTATUS).

 tar -cf - ./* | ( cd "${DIR}" && tar -xf - )
 return_codes=( "${PIPESTATUS[@]}" )
@@ -4757,29 +4266,19 @@ if (( return_codes[1] != 0 )); then
     do_something_else
 fi
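Here is a small runnable sketch of the same capture-immediately pattern (the pipeline itself is only an example):

```shell
#!/usr/bin/env bash

# Three-stage pipeline: printf succeeds, grep matches, cat succeeds.
printf 'foo\nbar\n' | grep foo | cat > /dev/null

# Capture PIPESTATUS right away, before any other command runs.
return_codes=( "${PIPESTATUS[@]}" )

echo "printf=${return_codes[0]} grep=${return_codes[1]} cat=${return_codes[2]}"
```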
 
-

Use common sense and BE CONSISTENT.

-The following two paragraphs are thoroughly quoted from the Google guidelines. But they hit the hammer on the head:
-
-If you are editing code, take a few minutes to look at the code around you and determine its style. If they use spaces around their if clauses, you should, too. If their comments have little boxes of stars around them, make your comments have little boxes of stars around them too.
-
-The point of having style guidelines is to have a common vocabulary of coding so people can concentrate on what you are saying rather than on how you are saying it. We present global style rules here, so people know the vocabulary. But local style is also important. If the code you add to a file looks drastically different from the existing code around it, the discontinuity throws readers out of their rhythm when they go to read it. Try to avoid this.
-
-
+

The following two paragraphs are quoted verbatim from the Google guidelines. But they hit the nail on the head:

+

If you are editing code, take a few minutes to look at the code around you and determine its style. If they use spaces around their if clauses, you should, too. If their comments have little boxes of stars around them, make your comments have little boxes of stars around them too.

+

The point of having style guidelines is to have a common vocabulary of coding so people can concentrate on what you are saying rather than on how you are saying it. We present global style rules here, so people know the vocabulary. But local style is also important. If the code you add to a file looks drastically different from the existing code around it, the discontinuity throws readers out of their rhythm when they go to read it. Try to avoid this.

Advanced Bash learning pro tip

-I also highly recommend having a read through the "Advanced Bash-Scripting Guide" (not from Google). I use it as the universal Bash reference and learn something new every time I look at it.
-
+

I also highly recommend having a read through the "Advanced Bash-Scripting Guide" (not from Google). I use it as the universal Bash reference and learn something new every time I look at it.

Advanced Bash-Scripting Guide
-
-Other related posts are:
-
+

Other related posts are:

2022-01-01 Bash Golf Part 2
2021-11-29 Bash Golf Part 1
2021-06-05 Gemtexter - One Bash script to rule it all
2021-05-16 Personal Bash coding style guide (You are currently reading this)
-
-E-Mail your comments to hi@paul.cyou :-)
-
+

E-Mail your comments to hi@paul.cyou :-)

Back to the main site
@@ -4797,16 +4296,11 @@ E-Mail your comments to hi@paul.cyou :-)

Welcome to the Geminispace

-Published at 2021-04-24T19:28:41+01:00; Updated at 2021-06-18
-
-ASCII Art by Andy Hood!
-
-Have you reached this article already via Gemini? It requires a Gemini client; web browsers such as Firefox, Chrome, Safari, etc., don't support the Gemini protocol. The Gemini address of this site (or the address of this capsule as people say in Geminispace) is:
-
+

Published at 2021-04-24T19:28:41+01:00; Updated at 2021-06-18

+

ASCII Art by Andy Hood!

+

Have you already reached this article via Gemini? It requires a Gemini client; web browsers such as Firefox, Chrome, Safari, etc., don't support the Gemini protocol. The Gemini address of this site (or the address of this capsule, as people say in Geminispace) is:

https://foo.zone
-
-However, if you still use HTTP, you are just surfing the fallback HTML version of this capsule. In that case, I suggest reading on what this is all about :-).
-
+

However, if you still use HTTP, you are just surfing the fallback HTML version of this capsule. In that case, I suggest reading on what this is all about :-).

 
     /\
@@ -4826,33 +4320,22 @@ However, if you still use HTTP, you are just surfing the fallback HTML version o
 '.;.;' ;'.;' ..;;' AsH
 
 
-

Motivation

My urge to revamp my personal website

-For some time, I had to urge to revamp my personal website. Not to update the technology and its design but to update all the content (+ keep it current) and start a small tech blog again. So unconsciously, I began to search for an excellent platform to do all of that in a KISS (keep it simple & stupid) way.
-
+

For some time, I had the urge to revamp my personal website. Not to update the technology and its design, but to update all the content (and keep it current) and start a small tech blog again. So, unconsciously, I began to search for an excellent platform to do all of that in a KISS (keep it simple & stupid) way.

My still great Laptop running hot

-Earlier this year (2021), I noticed that my almost seven-year-old but still great Laptop started to become hot and slowed down while surfing the web. Also, the Laptop's fan became quite noisy. This was all due to the additional bloat such as JavaScript, excessive use of CSS, tracking cookies+pixels, ads, and so on there was on the website.
-
-All I wanted was to read an interesting article, but after a big advertising pop-up banner appeared and made everything worse, I gave up and closed the browser tab.
-
+

Earlier this year (2021), I noticed that my almost seven-year-old but still great Laptop started to become hot and slowed down while surfing the web. Also, the Laptop's fan became quite noisy. This was all due to the bloat on the websites: JavaScript, excessive use of CSS, tracking cookies and pixels, ads, and so on.

+

All I wanted was to read an interesting article, but after a big advertising pop-up banner appeared and made everything worse, I gave up and closed the browser tab.

Discovering the Gemini internet protocol

-Around the same time, I discovered a relatively new, more lightweight protocol named Gemini, which does not support all these CPU-intensive features like HTML, JavaScript, and CSS. Also, tracking and ads are unsupported by the Gemini protocol.
-
-The "downside" is that due to the limited capabilities of the Gemini protocol, all sites look very old and spartan. But that is not a downside; that is, in fact, a design choice people made. It is up to the client software how your capsule looks. For example, you could use a graphical client, such as Lagrange, with nice font renderings and colours to improve the appearance. Or you could use a very minimalistic command line black-and-white Gemini client. It's your (the user's) choice.
-
+

Around the same time, I discovered a relatively new, more lightweight protocol named Gemini, which does not support all these CPU-intensive features like HTML, JavaScript, and CSS. Also, tracking and ads are unsupported by the Gemini protocol.

+

The "downside" is that due to the limited capabilities of the Gemini protocol, all sites look very old and spartan. But that is not a downside; that is, in fact, a design choice people made. It is up to the client software how your capsule looks. For example, you could use a graphical client, such as Lagrange, with nice font renderings and colours to improve the appearance. Or you could use a very minimalistic command line black-and-white Gemini client. It's your (the user's) choice.

Screenshot Amfora Gemini terminal client surfing this site
Screenshot graphical Lagrange Gemini client surfing this site
-
-Why is there a need for a new protocol? As the modern web is a superset of Gemini, can't we use simple HTML 1.0 instead? That's a good and valid question. It is not a technical problem but a human problem. We tend to abuse the features once they are available. You can ensure that things stay efficient and straightforward as long as you are using the Gemini protocol. On the other hand, you can't force every website on the modern web to only create plain and straightforward-looking HTML pages.
-
+

Why is there a need for a new protocol? As the modern web is a superset of Gemini, can't we use simple HTML 1.0 instead? That's a good and valid question. It is not a technical problem but a human problem. We tend to abuse the features once they are available. You can ensure that things stay efficient and straightforward as long as you are using the Gemini protocol. On the other hand, you can't force every website on the modern web to only create plain and straightforward-looking HTML pages.

My own Gemini capsule

-As it is effortless to set up and maintain your own Gemini capsule (Gemini server + content composed via the Gemtext markup language), I decided to create my own. What I like about Gemini is that I can use my favourite text editor and get typing. I don't need to worry about the style and design of the presence, and I also don't have to test anything in ten different web browsers. I can only focus on the content! As a matter of fact, I am using the Vim editor + its spellchecker + auto word completion functionality to write this.
-
-This site was generated with Gemtexter. You can read more about it here:
-
+

As it is effortless to set up and maintain your own Gemini capsule (Gemini server + content composed via the Gemtext markup language), I decided to create my own. What I like about Gemini is that I can use my favourite text editor and get typing. I don't need to worry about the style and design of the presence, and I also don't have to test anything in ten different web browsers. I can only focus on the content! As a matter of fact, I am using the Vim editor + its spellchecker + auto word completion functionality to write this.

+

This site was generated with Gemtexter. You can read more about it here:

Gemtexter - One Bash script to rule it all
-

Gemini advantages summarised

  • Supports an alternative to the modern bloated web
  • @@ -4863,22 +4346,16 @@ This site was generated with Gemtexter. You can read more about it here:
  • Supports privacy (no cookies, no request header fingerprinting, TLS encryption)
  • Fun to play with (it's a bit geeky, yes, but a lot of fun!)
-

Dive into deep Gemini space

-Check out one of the following links for more information about Gemini. For example, you will find a FAQ that explains why the protocol is named Gemini. Many Gemini capsules are dual-hosted via Gemini and HTTP(S) so that people new to Gemini can sneak peek at the content with a regular web browser. Some people go as far as tri-hosting all their content via HTTP(S), Gemini and Gopher.
-
+

Check out one of the following links for more information about Gemini. For example, you will find a FAQ that explains why the protocol is named Gemini. Many Gemini capsules are dual-hosted via Gemini and HTTP(S) so that people new to Gemini can sneak peek at the content with a regular web browser. Some people go as far as tri-hosting all their content via HTTP(S), Gemini and Gopher.

https://gemini.circumlunar.space
https://gemini.circumlunar.space
-
-Other related posts are:
-
+

Other related posts are:

2023-03-25 Gemtexter 2.0.0 - Let's Gemtext again^2
2022-08-27 Gemtexter 1.1.0 - Let's Gemtext again
2021-06-05 Gemtexter - One Bash script to rule it all
2021-04-24 Welcome to the Geminispace (You are currently reading this)
-
-E-Mail your comments to hi@paul.cyou :-)
-
+

E-Mail your comments to hi@paul.cyou :-)

Back to the main site
@@ -4896,50 +4373,30 @@ E-Mail your comments to hi@paul.cyou :-)

DTail - The distributed log tail program

-Published at 2021-04-22T19:28:41+01:00; Updated at 2021-04-26
-
+

Published at 2021-04-22T19:28:41+01:00; Updated at 2021-04-26

DTail logo image
-
-This article first appeared at the Mimecast Engineering Blog but I made it available here in my personal internet site too.
-
+

This article first appeared on the Mimecast Engineering Blog, but I have made it available here on my personal internet site too.

Original Mimecast Engineering Blog post at Medium
-
-Running a large cloud-based service requires monitoring the state of huge numbers of machines, a task for which many standard UNIX tools were not really designed. In this post, I will describe a simple program, DTail, that Mimecast has built and released as Open-Source, which enables us to monitor log files of many servers at once without the costly overhead of a full-blown log management system.
-
-At Mimecast, we run over 10 thousand server boxes. Most of them host multiple microservices and each of them produces log files. Even with the use of time series databases and monitoring systems, raw application logs are still an important source of information when it comes to analysing, debugging, and troubleshooting services.
-
-Every engineer familiar with UNIX or a UNIX-like platform (e.g., Linux) is well aware of tail, a command-line program for displaying a text file content on the terminal which is also especially useful for following application or system log files with tail -f logfile.
-
-Think of DTail as a distributed version of the tail program which is very useful when you have a distributed application running on many servers. DTail is an Open-Source, cross-platform, fairly easy to use, support and maintain log file analysis & statistics gathering tool designed for Engineers and Systems Administrators. It is programmed in Google Go.
-
+

Running a large cloud-based service requires monitoring the state of huge numbers of machines, a task for which many standard UNIX tools were not really designed. In this post, I will describe a simple program, DTail, that Mimecast has built and released as Open-Source, which enables us to monitor log files of many servers at once without the costly overhead of a full-blown log management system.

+

At Mimecast, we run over 10 thousand server boxes. Most of them host multiple microservices and each of them produces log files. Even with the use of time series databases and monitoring systems, raw application logs are still an important source of information when it comes to analysing, debugging, and troubleshooting services.

+

Every engineer familiar with UNIX or a UNIX-like platform (e.g., Linux) is well aware of tail, a command-line program for displaying text file content on the terminal, which is also especially useful for following application or system log files with tail -f logfile.

+

Think of DTail as a distributed version of the tail program, which is very useful when you have a distributed application running on many servers. DTail is an Open-Source, cross-platform log file analysis and statistics gathering tool that is fairly easy to use, support and maintain, designed for Engineers and Systems Administrators. It is programmed in Google Go.

A Mimecast Pet Project

-DTail got its inspiration from public domain tools available already in this area but it is a blue sky from-scratch development which was first presented at Mimecast’s annual internal Pet Project competition (awarded with a Bronze prize). It has gained popularity since and is one of the most widely deployed DevOps tools at Mimecast (reaching nearly 10k server installations) and many engineers use it on a regular basis. The Open-Source version of DTail is available at:
-
+

DTail got its inspiration from public domain tools available already in this area but it is a blue sky from-scratch development which was first presented at Mimecast’s annual internal Pet Project competition (awarded with a Bronze prize). It has gained popularity since and is one of the most widely deployed DevOps tools at Mimecast (reaching nearly 10k server installations) and many engineers use it on a regular basis. The Open-Source version of DTail is available at:

https://dtail.dev
-
-Try it out — We would love any feedback. But first, read on…
-
+

Try it out — We would love any feedback. But first, read on…

Differentiating from log management systems

-Why not just use a full-blown log management system? There are various Open-Source and commercial log management solutions available on the market you could choose from (e.g. the ELK stack). Most of them store the logs in a centralized location and are fairly complex to set up and operate. Possibly they are also pretty expensive to operate if you have to buy dedicated hardware (or pay fees to your cloud provider) and have to hire support staff for it.
-
-DTail does not aim to replace any of the log management tools already available but is rather an additional tool crafted especially for ad-hoc debugging and troubleshooting purposes. DTail is cheap to operate as it does not require any dedicated hardware for log storage as it operates directly on the source of the logs. It means that there is a DTail server installed on all server boxes producing logs. This decentralized comes with the direct advantages that there is no introduced delay because the logs are not shipped to a central log storage device. The reduced complexity also makes it more robust against outages. You won’t be able to troubleshoot your distributed application very well if the log management infrastructure isn’t working either.
-
+

Why not just use a full-blown log management system? There are various Open-Source and commercial log management solutions available on the market you could choose from (e.g. the ELK stack). Most of them store the logs in a centralized location and are fairly complex to set up and operate. Possibly they are also pretty expensive to operate if you have to buy dedicated hardware (or pay fees to your cloud provider) and have to hire support staff for it.

+

DTail does not aim to replace any of the log management tools already available but is rather an additional tool crafted especially for ad-hoc debugging and troubleshooting purposes. DTail is cheap to operate, as it does not require any dedicated hardware for log storage: it operates directly on the source of the logs. This means that a DTail server is installed on every server box producing logs. This decentralized approach comes with the direct advantage that there is no added delay, because the logs are not shipped to a central log storage device. The reduced complexity also makes it more robust against outages. You won’t be able to troubleshoot your distributed application very well if the log management infrastructure isn’t working either.

DTail sample session animated gif
-
-As a downside, you won’t be able to access any logs with DTail when the server is down. Furthermore, a server can store logs only up to a certain capacity as disks will fill up. For the purpose of ad-hoc debugging, these are not typically issues. Usually, it’s the application you want to debug and not the server. And disk space is rarely an issue for bare metal and VM-based systems these days, with sufficient space for several weeks’ worth of log storage being available. DTail also supports reading compressed logs. The currently supported compression algorithms are gzip and zstd.
-
+

As a downside, you won’t be able to access any logs with DTail when the server is down. Furthermore, a server can store logs only up to a certain capacity as disks will fill up. For the purpose of ad-hoc debugging, these are not typically issues. Usually, it’s the application you want to debug and not the server. And disk space is rarely an issue for bare metal and VM-based systems these days, with sufficient space for several weeks’ worth of log storage being available. DTail also supports reading compressed logs. The currently supported compression algorithms are gzip and zstd.

Combining simplicity, security and efficiency

-DTail also has a client component that connects to multiple servers concurrently for log files (or any other text files).
-
-The DTail client interacts with a DTail server on port TCP/2222 via SSH protocol and does not interact in any way with the system’s SSH server (e.g., OpenSSH Server) which might be running at port TCP/22 already. As a matter of fact, you don’t need a regular SSH server running for DTail at all. There is no support for interactive login shells at TCP/2222 either, as by design that port can only be used for text data streaming. The SSH protocol is used for the public/private key infrastructure and transport encryption only and DTail implements its own protocol on top of SSH for the features provided. There is no need to set up or buy any additional TLS certificates. The port 2222 can be easily reconfigured if you preferred to use a different one.
-
-The DTail server, which is a single static binary, will not fork an external process. This means that all features are implemented in native Go code (exception: Linux ACL support is implemented in C, but it must be enabled explicitly on compile time) and therefore helping to make it robust, secure, efficient, and easy to deploy. A single client, running on a standard Laptop, can connect to thousands of servers concurrently while still maintaining a small resource footprint.
-
-Recent log files are very likely still in the file system caches on the servers. Therefore, there tends to be a minimal I/O overhead involved.
-
+

DTail also has a client component that concurrently connects to multiple servers to follow log files (or any other text files).

+

The DTail client interacts with a DTail server on port TCP/2222 via the SSH protocol and does not interact in any way with the system’s SSH server (e.g., OpenSSH Server), which might already be running on port TCP/22. As a matter of fact, you don’t need a regular SSH server running for DTail at all. There is no support for interactive login shells at TCP/2222 either, as by design that port can only be used for text data streaming. The SSH protocol is used for the public/private key infrastructure and transport encryption only, and DTail implements its own protocol on top of SSH for the features provided. There is no need to set up or buy any additional TLS certificates. Port 2222 can easily be reconfigured if you prefer to use a different one.

+

The DTail server, which is a single static binary, will not fork any external processes. This means that all features are implemented in native Go code (exception: Linux ACL support is implemented in C, but it must be enabled explicitly at compile time), which helps to make it robust, secure, efficient, and easy to deploy. A single client, running on a standard Laptop, can connect to thousands of servers concurrently while still maintaining a small resource footprint.

+

Recent log files are very likely still in the file system caches on the servers. Therefore, there tends to be a minimal I/O overhead involved.

The DTail family of commands

-Following the UNIX philosophy, DTail includes multiple command-line commands each of them for a different purpose:
-
+

Following the UNIX philosophy, DTail includes multiple command-line commands each of them for a different purpose:

  • dserver: The DTail server, the only binary required to be installed on the servers involved.
  • dtail: The distributed log tail client for following log files.
  • @@ -4947,32 +4404,21 @@ Following the UNIX philosophy, DTail includes multiple command-line commands eac
  • dgrep: The distributed grep client for searching text files for a regular expression pattern.
  • dmap: The distributed map-reduce client for aggregating stats from log files.
-
DGrep sample session animated gif
-

Usage example

-The use of these commands is almost self-explanatory for a person already used to the standard command line in Unix systems. One of the main goals is to make DTail easy to use. A tool that is too complicated to use under high-pressure scenarios (e.g., during an incident) can be quite detrimental.
-
-The basic idea is to start one of the clients from the command line and provide a list of servers to connect to with –servers. You also must provide a path of remote (log) files via –files. If you want to process multiple files per server, you could either provide a comma-separated list of file paths or make use of file system globbing (or a combination of both).
-
-The following example would connect to all DTail servers listed in the serverlist.txt, follow all files with the ending .log and filter for lines containing the string error. You can specify any Go compatible regular expression. In this example we add the case-insensitive flag to the regex:
-
+

The use of these commands is almost self-explanatory for a person already used to the standard command line in Unix systems. One of the main goals is to make DTail easy to use. A tool that is too complicated to use under high-pressure scenarios (e.g., during an incident) can be quite detrimental.

+

The basic idea is to start one of the clients from the command line and provide a list of servers to connect to with --servers. You must also provide a path to the remote (log) files via --files. If you want to process multiple files per server, you can either provide a comma-separated list of file paths or make use of file system globbing (or a combination of both).

+

The following example would connect to all DTail servers listed in serverlist.txt, follow all files ending in .log, and filter for lines containing the string error. You can specify any Go-compatible regular expression. In this example, we add the case-insensitive flag to the regex:

 dtail --servers serverlist.txt --files '/var/log/*.log' --regex '(?i:error)'
 
-
-You usually want to specify a regular expression as a client argument. This will mean that responses are pre-filtered for all matching lines on the server-side and thus sending back only the relevant lines to the client. If your logs are growing very rapidly and the regex is not specific enough there might be the chance that your client is not fast enough to keep up processing all of the responses. This could be due to a network bottleneck or just as simple as a slow terminal emulator displaying the log lines on the client-side.
-
-A green 100 in the client output before each log line received from the server always indicates that there were no such problems and 100% of all log lines could be displayed on your terminal (have a look at the animated Gifs in this post). If the percentage falls below 100 it means that some of the channels used by the servers to send data to the client are congested and lines were dropped. In this case, the color will change from green to red. The user then could decide to run the same query but with a more specific regex.
-
-You could also provide a comma-separated list of servers as opposed to a text file. There are many more options you could use. The ones listed here are just the very basic ones. There are more instructions and usage examples on the GitHub page. Also, you can study even more of the available options via the --help switch (some real treasures might be hidden there).
-
+<p>You usually want to specify a regular expression as a client argument. Responses are then pre-filtered on the server side, and only the matching lines are sent back to the client. If your logs grow very rapidly and the regex is not specific enough, there is a chance that your client cannot keep up with processing all of the responses. This could be due to a network bottleneck or something as simple as a slow terminal emulator displaying the log lines on the client side.</p>

+<p>A green 100 in the client output before each log line received from the server indicates that there were no such problems and 100% of all log lines could be displayed on your terminal (have a look at the animated GIFs in this post). If the percentage falls below 100, it means that some of the channels used by the servers to send data to the client are congested and lines were dropped; in this case, the color changes from green to red. The user can then decide to run the same query with a more specific regex.</p>

+<p>You can also provide a comma-separated list of servers instead of a text file. There are many more options available; the ones listed here are just the very basic ones. You will find more instructions and usage examples on the GitHub page, and you can study even more of the available options via the --help switch (some real treasures might be hidden there).</p>

Fitting it in

-DTail integrates nicely into the user management of existing infrastructure. It follows normal system permissions and does not open new “holes” on the server which helps to keep security departments happy. The user would not have more or less file read permissions than he would have via a regular SSH login shell. There is a full SSH key, traditional UNIX permissions, and Linux ACL support. There is also a very low resource footprint involved. On average for tailing and searching log files less than 100MB RAM and less than a quarter of a CPU core per participating server are required. Complex map-reduce queries on big data sets will require more resources accordingly.
-
+<p>DTail integrates nicely into the user management of existing infrastructure. It follows normal system permissions and does not open new “holes” on the server, which helps to keep security departments happy. Users have exactly the file read permissions they would have via a regular SSH login shell. There is full SSH key, traditional UNIX permission, and Linux ACL support. The resource footprint is also very low: on average, tailing and searching log files requires less than 100 MB of RAM and less than a quarter of a CPU core per participating server. Complex map-reduce queries on big data sets will require more resources accordingly.</p>

Advanced features

-The features listed here are out of the scope of this blog post but are worthwhile to mention:
-
+<p>The features listed here are out of the scope of this blog post but are worth mentioning:</p>

  • Distributed map-reduce queries on stats provided in log files with dmap. dmap comes with its own SQL-like aggregation query language.
  • Stats streaming with continuous map-reduce queries. The difference from normal queries is that the stats are aggregated only over the newly written log lines of each specified interval, giving a de facto live stats view for each interval.
@@ -4980,30 +4426,22 @@ The features listed here are out of the scope of this blog post but are worthwhi
  • Server-side stats streaming with continuous map-reduce queries. This can, for example, be used to periodically generate stats from the logs at a configured interval, e.g., log error counts by the minute. These can then be sent to a time-series database (e.g., Graphite) and plotted in a Grafana dashboard.
  • Support for custom extensions. E.g., for different server discovery methods (so you don’t have to rely on plain server lists) and log file formats (so that map-reduce queries can parse more stats from the logs).
-

For the future

-There are various features we want to see in the future.
-
+<p>There are various features we want to see in the future.</p>

  • A spartan mode that prints nothing but the raw remote log lines would be a nice feature to have. This would make it easier to post-process the data produced by the DTail client with common UNIX tools. (To some degree this is already possible: just disable the client's ANSI terminal color output with -noColors and pipe the output to another program.)
  • It would be tempting to implement a dgoawk command, a distributed version of the AWK programming language implemented purely in Go, for advanced text data stream processing. There are 3rd-party libraries implementing AWK in pure Go which could be used.
  • A more complex change would be support for federated queries. You can connect to thousands of servers from a single client running on a laptop, but does that scale to 100k servers? Some of the servers could be used as middleware for connecting to even more servers.
  • Another aspect is extending the documentation. Especially the advanced features, such as the map-reduce query language and how to configure server-side queries, currently require more documentation. For now, you can read the code, the sample config files, or just ask the author! This will certainly be addressed in the future.
-

Open Source

-Mimecast highly encourages you to have a look at DTail and submit an issue for any features you would like to see. Have you found a bug? Maybe you just have a question or comment? If you want to go a step further: We would also love to see pull requests for any features or improvements. Either way, if in doubt just contact us via the DTail GitHub page.
-
+<p>Mimecast highly encourages you to have a look at DTail and submit an issue for any features you would like to see. Have you found a bug? Maybe you just have a question or comment? If you want to go a step further, we would also love to see pull requests for any features or improvements. Either way, if in doubt just contact us via the DTail GitHub page.</p>

https://dtail.dev
-
-Other related posts are:
-
+<p>Other related posts are:</p>

2022-10-30 Installing DTail on OpenBSD
2022-03-06 The release of DTail 4.0.0
2021-04-22 DTail - The distributed log tail program (You are currently reading this)
-
-E-Mail your comments to hi@paul.cyou :-)
-
+<p>E-Mail your comments to hi@paul.cyou :-)</p>

Back to the main site
@@ -6250,20 +5688,16 @@ fib(10) = 55

Perl Daemon (Service Framework)

-Published at 2011-05-07T22:26:02+01:00; Updated at 2021-05-07
-
+<p>Published at 2011-05-07T22:26:02+01:00; Updated at 2021-05-07</p>

    a'!   _,,_ a'!   _,,_     a'!   _,,_
      \\_/    \  \\_/    \      \\_/    \.-,
       \, /-( /'-,\, /-( /'-,    \, /-( /
       //\ //\\   //\ //\\       //\ //\\jrei
 
-
-PerlDaemon is a minimal daemon for Linux and other Unix like operating systems programmed in Perl. It is a minimal but pretty functional and fairly generic service framework. This means that it does not do anything useful other than providing a framework for starting, stopping, configuring and logging. To do something useful, a module (written in Perl) must be provided.
-
+<p>PerlDaemon is a minimal daemon for Linux and other Unix-like operating systems, programmed in Perl. It is a minimal but pretty functional and fairly generic service framework. This means that it does not do anything useful by itself other than provide a framework for starting, stopping, configuring and logging. To do something useful, a module (written in Perl) must be provided.</p>

Features

-PerlDaemon supports:
-
+<p>PerlDaemon supports:</p>

  • Automatic daemonizing
  • Logging
@@ -6274,7 +5708,6 @@ PerlDaemon supports:
  • Easy to extend
  • Multi-instance support (just use a different directory for each instance).
-

Quick Guide

 # Starting
@@ -6286,12 +5719,9 @@ PerlDaemon supports:
# Alternatively: Starting in foreground
./bin/perldaemon start daemon.daemonize=no (or shortcut ./control foreground)
-
-To stop a daemon from running in foreground mode, "Ctrl+C" must be hit. To see more available startup options run "./control" without any argument.
-
+<p>To stop a daemon running in foreground mode, press Ctrl+C. To see more available startup options, run "./control" without any argument.</p>

How to configure

-The daemon instance can be configured in "./conf/perldaemon.conf". If you want to change a property only once, it is also possible to specify it on the command line (which will take precedence over the config file). All available config properties can be displayed via "./control keys":
-
+<p>The daemon instance can be configured in "./conf/perldaemon.conf". If you want to change a property only once, it is also possible to specify it on the command line (which will take precedence over the config file). All available config properties can be displayed via "./control keys":</p>

 pb@titania:~/svn/utils/perldaemon/trunk$ ./control keys
 # Path to the logfile
@@ -6318,10 +5748,8 @@ daemon.alivefile=./run/perldaemon.alive
 # Specifies the working directory
 daemon.wd=./
 
-

Example

-So let's start the daemon with a loop interval of 10 seconds:
-
+<p>So let's start the daemon with a loop interval of 10 seconds:</p>

 $ ./control keys | grep daemon.loopinterval
 daemon.loopinterval=1
@@ -6335,20 +5763,15 @@ Mon Jun 13 11:29:27 2011 (PID 2838): ExampleModule Test 2
 $ ./control stop
 Stopping daemon now...
 
-
-If you want to change that property forever, either edit perldaemon.conf or do this:
-
+<p>If you want to change that property forever, either edit perldaemon.conf or do this:</p>

 $ ./control keys daemon.loopinterval=10 > new.conf; mv new.conf conf/perldaemon.conf
 
-

HiRes event loop

-PerlDaemon uses Time::HiRes to make sure that all the events run in correct intervals. For each loop run, a time carry value is recorded and added to the next loop run to catch up on lost time.
-
+<p>PerlDaemon uses Time::HiRes to make sure that all events run at correct intervals. For each loop run, a time carry value is recorded and added to the next loop run to catch up on lost time.</p>

Writing your own modules

Example module

-This is one of the example modules you will find in the source code. It should be pretty self-explanatory if you know Perl :-).
-
+<p>This is one of the example modules you will find in the source code. It should be pretty self-explanatory if you know Perl :-).</p>

 package PerlDaemonModules::ExampleModule;
 
@@ -6380,10 +5803,8 @@ sub do ($) {
 
 1;
 
-

Your own module

-Want to give it some better use? It's just as easy as:
-
+<p>Want to give it some better use? It's just as easy as:</p>

  cd ./lib/PerlDaemonModules/
  cp ExampleModule.pm YourModule.pm
@@ -6391,24 +5812,16 @@ Want to give it some better use? It's just as easy as:
cd -
./bin/perldaemon restart (or shortcut ./control restart)
-
-Now watch ./log/perldaemon.log closely. It is a good practice to test your modules in 'foreground mode' (see above how to do that).
-
-BTW: You can install as many modules within the same instance as desired. But they are run in sequential order (in future, they can also run in parallel using several threads or processes).
-
+<p>Now watch ./log/perldaemon.log closely. It is good practice to test your modules in 'foreground mode' (see above for how to do that).</p>

+<p>By the way: you can install as many modules within the same instance as desired, but they are run in sequential order (in the future, they could also run in parallel using several threads or processes).</p>

May the source be with you

-You can find PerlDaemon (including the examples) at:
-
+<p>You can find PerlDaemon (including the examples) at:</p>

https://codeberg.org/snonux/perldaemon
-
-Other related posts are:
-
+<p>Other related posts are:</p>

2022-05-27 Perl is still a great choice
2011-05-07 Perl Daemon (Service Framework) (You are currently reading this)
2008-06-26 Perl Poetry
-
-E-Mail your comments to hi@paul.cyou :-)
-
+<p>E-Mail your comments to hi@paul.cyou :-)</p>

Back to the main site
@@ -7145,8 +6558,7 @@ _jgs_\|//_\\|///_\V/_\|//__

Perl Poetry

-Published at 2008-06-26T21:43:51+01:00; Updated at 2021-05-04
-
+<p>Published at 2008-06-26T21:43:51+01:00; Updated at 2021-05-04</p>

  '\|/'                                  *
 -- * -----
@@ -7169,13 +6581,9 @@ _~~|~/_|_|__/|~~~~~~~ |  / ~~~~~ |   | ~~~~~~~~
              ~ ~ ~~~ _|| (_/ (___)_| |Nov291999
                     (__)         (____)
 
-
-Here are some Perl Poems I wrote. They don't do anything useful when you run them, but they don't produce a compiler error either. They only exist for fun and demonstrate what you can do with Perl syntax.
-
-Wikipedia: "Perl poetry is the practice of writing poems that can be compiled as legal Perl code, for example the piece known as Black Perl. Perl poetry is made possible by the large number of English words that are used in the Perl language. New poems are regularly submitted to the community at PerlMonks."
-
+<p>Here are some Perl poems I wrote. They don't do anything useful when you run them, but they don't produce a compiler error either. They exist only for fun and demonstrate what you can do with Perl syntax.</p>

+<p>Wikipedia: "Perl poetry is the practice of writing poems that can be compiled as legal Perl code, for example the piece known as Black Perl. Perl poetry is made possible by the large number of English words that are used in the Perl language. New poems are regularly submitted to the community at PerlMonks."</p>

https://en.wikipedia.org/wiki/Perl
-

math.pl

 #!/usr/bin/perl
@@ -7218,7 +6626,6 @@ home: //ig,'nore', time and sleep $very =~ s/tr/on/g;
 __END__
 
 
-

christmas.pl

 #!/usr/bin/perl
@@ -7263,7 +6670,6 @@ __END__
 
 This is perl, v5.8.8 built for i386-freebsd-64int
 
-

shopping.pl

 #!/usr/bin/perl
@@ -7296,20 +6702,14 @@ and sleep until unpack$ing, cool products();
 __END__
 This is perl, v5.8.8 built for i386-freebsd-64int
 
-

More...

-Did you like what you saw? Have a look at Codeberg to see my other poems too:
-
+<p>Did you like what you saw? Have a look at Codeberg to see my other poems too:</p>

https://codeberg.org/snonux/perl-poetry
-
-Other related posts are:
-
+<p>Other related posts are:</p>

2022-05-27 Perl is still a great choice
2011-05-07 Perl Daemon (Service Framework)
2008-06-26 Perl Poetry (You are currently reading this)
-
-E-Mail your comments to hi@paul.cyou :-)
-
+<p>E-Mail your comments to hi@paul.cyou :-)</p>

Back to the main site
-- cgit v1.2.3