authorPaul Buetow <paul@buetow.org>2021-05-06 09:43:24 +0100
committerPaul Buetow <git@mx.buetow.org>2021-05-21 05:11:04 +0100
commit41d0fc4a4c28dd1b0d1009dc7db15ea835c1ac12 (patch)
treee94bc5ae0847c456eb0b6de04e7df1e1c6c84f13
parentb6be618f38236f42502e03c57f2c4c210be3371e (diff)
include content to the atom feed
-rw-r--r-- content/gemtext/gemfeed/atom.xml | 1007
-rw-r--r-- content/html/gemfeed/atom.xml | 1007
2 files changed, 1996 insertions, 18 deletions
diff --git a/content/gemtext/gemfeed/atom.xml b/content/gemtext/gemfeed/atom.xml
index 557cf790..88ed8411 100644
--- a/content/gemtext/gemfeed/atom.xml
+++ b/content/gemtext/gemfeed/atom.xml
@@ -1,6 +1,6 @@
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
- <updated>2021-05-05T13:00:59+01:00</updated>
+ <updated>2021-05-06T09:41:34+01:00</updated>
<title>buetow.org feed</title>
<subtitle>Having fun with computers!</subtitle>
<link href="gemini://buetow.org/gemfeed/atom.xml" rel="self" />
@@ -11,87 +11,1076 @@
<link href="gemini://buetow.org/gemfeed/2021-04-24-welcome-to-the-geminispace.gmi" />
<id>gemini://buetow.org/gemfeed/2021-04-24-welcome-to-the-geminispace.gmi</id>
<updated>2021-04-24T19:28:41+01:00</updated>
- <summary>Have you reached this article already via Gemini? You need a special client for that, web browsers such as Firefox, Chrome, Safari etc. don't support the Gemini protocol. The Gemini address of this site (or the address of this capsule as people say in Geminispace) is: ... to read on visit my site.</summary>
<author>
<name>Paul Buetow</name>
<email>comments@mx.buetow.org</email>
</author>
+ <summary>Have you reached this article already via Gemini? You need a special client for that, web browsers such as Firefox, Chrome, Safari etc. don't support the Gemini protocol. The Gemini address of this site (or the address of this capsule as people say in Geminispace) is: ... to read on visit my site.</summary>
+ <content type="text/html">
+ <h1>Welcome to the Geminispace</h1>
+<p>Have you reached this article already via Gemini? You need a special client for that, web browsers such as Firefox, Chrome, Safari etc. don't support the Gemini protocol. The Gemini address of this site (or the address of this capsule as people say in Geminispace) is:</p>
+<a class="textlink" href="gemini://buetow.org">gemini://buetow.org</a><br />
+<p>If, however, you still use HTTP, then you are just surfing the fallback HTML version of this capsule. In that case I suggest reading on to learn what this is all about :-).</p>
+<pre>
+
+ /\
+ / \
+ | |
+ |NASA|
+ | |
+ | |
+ | |
+ ' `
+ |Gemini|
+ | |
+ |______|
+ '-`'-` .
+ / . \'\ . .'
+ ''( .'\.' ' .;'
+'.;.;' ;'.;' ..;;' AsH
+
+</pre>
+<h2>Motivation</h2>
+<h3>My urge to revamp my personal website</h3>
+<p>For some time I had the urge to revamp my personal website. Not to update its technology and design, but to update all the content (and keep it current) and also to start a small tech blog again. So unconsciously I started to search for a good platform and/or software to do all of that in a KISS (keep it simple & stupid) way.</p>
+<h3>My still great Laptop running hot</h3>
+<p>Earlier this year (2021) I noticed that my 6 year old but still great Laptop started to run hot and slow down while surfing the web. Also, the Laptop's fan became quite noisy. This was all due to the additional bloat on the websites: JavaScript, excessive use of CSS, tracking cookies and pixels, ads and so on.</p>
+<p>All I wanted was to read an interesting article, but after a big advertising pop-up banner appeared and made everything worse I gave up and closed the browser tab.</p>
+<h2>Discovering the Gemini internet protocol</h2>
+<p>Around the same time I discovered a relatively new, more lightweight protocol named Gemini, which does not support CPU-intensive features such as HTML, JavaScript and CSS. Tracking and ads are not supported by the Gemini protocol either.</p>
+<p>The "downside" is that due to the limited capabilities of the Gemini protocol all sites look very old and spartan. But that is not really a downside, that is in fact a design choice people made. It is up to the client software how your capsule looks. For example, you could use a graphical client with nice font renderings and colors to improve the appearance. Or you could just use a very minimalistic command line black-and-white Gemini client. It's your (the user's) choice.</p>
+<i>Screenshot Amfora Gemini terminal client surfing this site:</i><a href="https://buetow.org/gemfeed/2021-04-24-welcome-to-the-geminispace/amfora-screenshot.png"><img alt="Screenshot Amfora Gemini terminal client surfing this site" title="Screenshot Amfora Gemini terminal client surfing this site" src="https://buetow.org/gemfeed/2021-04-24-welcome-to-the-geminispace/amfora-screenshot.png" /></a><br />
+<p>Why is there a need for a new protocol? As the modern web is a superset of Gemini, can't we just use simple HTML 1.0? That's a good and valid question. It is not a technical problem but a human problem. We tend to abuse the features once they are available. You can be sure that things stay simple and efficient as long as you are using the Gemini protocol. On the other hand you can't force every website in the modern web to only create plain and simple looking HTML pages.</p>
+<h2>My own Gemini capsule</h2>
+<p>As it is very easy to set up and maintain your own Gemini capsule (a Gemini server + content composed in the Gemtext markup language), I decided to create my own. What I really like about Gemini is that I can just use my favorite text editor and get typing. I don't need to worry about the style and design of the site and I also don't have to test anything in ten different web browsers. I can focus only on the content! As a matter of fact, I am using the Vim editor + its spellchecker + auto word completion functionality to write this.</p>
+<h2>Advantages summarised</h2>
+<ul>
+<li>Supports an alternative to the modern bloated web</li>
+<li>Easy to operate and easy to write content</li>
+<li>No need to worry about various web browser compatibilities</li>
+<li>It's the client's responsibility how the content is designed+presented</li>
+<li>Lightweight (although not as lightweight as the Gopher protocol)</li>
+<li>Supports privacy (no cookies, no request header fingerprinting, TLS encryption)</li>
+<li>Fun to play with (it's a bit geeky yes, but a lot of fun!)</li>
+</ul>
+<h2>Dive into deep Gemini space</h2>
+<p>Check out one of the following links for more information about Gemini. For example, you will find a FAQ which explains why the protocol is named "Gemini". Many Gemini capsules are dual-hosted via Gemini and HTTP(S), so that people new to Gemini can take a sneak peek at the content with a normal web browser. As a matter of fact, some people go as far as tri-hosting all their content via HTTP(S), Gemini and Gopher.</p>
+<a class="textlink" href="gemini://gemini.circumlunar.space">gemini://gemini.circumlunar.space</a><br />
+<a class="textlink" href="https://gemini.circumlunar.space">https://gemini.circumlunar.space</a><br />
+<p>E-Mail me your thoughts at comments@mx.buetow.org!</p>
+ </content>
</entry>
<entry>
<title>DTail - The distributed log tail program</title>
<link href="gemini://buetow.org/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program.gmi" />
<id>gemini://buetow.org/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program.gmi</id>
<updated>2021-04-22T19:28:41+01:00</updated>
- <summary>This article first appeared at the Mimecast Engineering Blog but I made it available here in my personal Gemini capsule too. ...to read on visit my site.</summary>
<author>
<name>Paul Buetow</name>
<email>comments@mx.buetow.org</email>
</author>
+ <summary>This article first appeared at the Mimecast Engineering Blog but I made it available here in my personal Gemini capsule too. ...to read on visit my site.</summary>
+ <content type="text/html">
+ <h1>DTail - The distributed log tail program</h1>
+<i>DTail logo image:</i><a href="https://buetow.org/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/dtail-logo.png"><img alt="DTail logo image" title="DTail logo image" src="https://buetow.org/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/dtail-logo.png" /></a><br />
+<p>This article first appeared at the Mimecast Engineering Blog but I made it available here in my personal Gemini capsule too.</p>
+<a class="textlink" href="https://medium.com/mimecast-engineering/dtail-the-distributed-log-tail-program-79b8087904bb">Original Mimecast Engineering Blog post at Medium</a><br />
+<p>Running a large cloud-based service requires monitoring the state of huge numbers of machines, a task for which many standard UNIX tools were not really designed. In this post, I will describe a simple program, DTail, that Mimecast has built and released as Open-Source, which enables us to monitor log files of many servers at once without the costly overhead of a full-blown log management system.</p>
+<p>At Mimecast, we run over 10 thousand server boxes. Most of them host multiple microservices and each of them produces log files. Even with the use of time series databases and monitoring systems, raw application logs are still an important source of information when it comes to analysing, debugging, and troubleshooting services.</p>
+<p>Every engineer familiar with UNIX or a UNIX-like platform (e.g., Linux) is well aware of tail, a command-line program for displaying the content of a text file on the terminal, which is especially useful for following application or system log files with tail -f logfile.</p>
+<p>Think of DTail as a distributed version of the tail program, which is very useful when you have a distributed application running on many servers. DTail is an Open-Source, cross-platform log file analysis and statistics gathering tool designed for Engineers and Systems Administrators, and it is fairly easy to use, support and maintain. It is programmed in Google Go.</p>
+<h2>A Mimecast Pet Project</h2>
+<p>DTail got its inspiration from public domain tools already available in this area, but it is a blue-sky, from-scratch development which was first presented at Mimecast’s annual internal Pet Project competition (where it was awarded a Bronze prize). It has gained popularity since and is one of the most widely deployed DevOps tools at Mimecast (reaching nearly 10k server installations), and many engineers use it on a regular basis. The Open-Source version of DTail is available at:</p>
+<a class="textlink" href="https://dtail.dev">https://dtail.dev</a><br />
+<p>Try it out — We would love any feedback. But first, read on…</p>
+<h2>Differentiating from log management systems</h2>
+<p>Why not just use a full-blown log management system? There are various Open-Source and commercial log management solutions available on the market you could choose from (e.g. the ELK stack). Most of them store the logs in a centralized location and are fairly complex to set up and operate. They can also be pretty expensive to run if you have to buy dedicated hardware (or pay fees to your cloud provider) and have to hire support staff for it.</p>
+<p>DTail does not aim to replace any of the log management tools already available, but is rather an additional tool crafted especially for ad-hoc debugging and troubleshooting purposes. DTail is cheap to operate: it does not require any dedicated hardware for log storage, as it operates directly on the source of the logs. This means that a DTail server is installed on all server boxes producing logs. This decentralized approach comes with the direct advantage that there is no added delay, because the logs are not shipped to a central log storage device. The reduced complexity also makes it more robust against outages. You won’t be able to troubleshoot your distributed application very well if the log management infrastructure isn’t working either.</p>
+<i>DTail sample session animated gif:</i><a href="https://buetow.org/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/dtail.gif"><img alt="DTail sample session animated gif" title="DTail sample session animated gif" src="https://buetow.org/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/dtail.gif" /></a><br />
+<p>As a downside, you won’t be able to access any logs with DTail when the server is down. Furthermore, a server can store logs only up to a certain capacity as disks will fill up. For the purpose of ad-hoc debugging, these are not typically issues. Usually, it’s the application you want to debug and not the server. And disk space is rarely an issue for bare metal and VM-based systems these days, with sufficient space for several weeks’ worth of log storage being available. DTail also supports reading compressed logs. The currently supported compression algorithms are gzip and zstd.</p>
+<h2>Combining simplicity, security and efficiency</h2>
+<p>DTail also has a client component that connects to multiple servers concurrently to follow log files (or any other text files).</p>
+<p>The DTail client interacts with a DTail server on port TCP/2222 via the SSH protocol and does not interact in any way with the system’s SSH server (e.g., OpenSSH Server), which might already be running on port TCP/22. As a matter of fact, you don’t need a regular SSH server running for DTail at all. There is no support for interactive login shells on TCP/2222 either, as by design that port can only be used for text data streaming. The SSH protocol is used for the public/private key infrastructure and transport encryption only, and DTail implements its own protocol on top of SSH for the features provided. There is no need to set up or buy any additional TLS certificates. Port 2222 can easily be reconfigured if you prefer to use a different one.</p>
+<p>The DTail server, which is a single static binary, will not fork an external process. This means that all features are implemented in native Go code (exception: Linux ACL support is implemented in C, but it must be enabled explicitly at compile time), which helps make it robust, secure, efficient, and easy to deploy. A single client, running on a standard Laptop, can connect to thousands of servers concurrently while still maintaining a small resource footprint.</p>
+<p>Recent log files are very likely still in the file system caches on the servers. Therefore, there tends to be a minimal I/O overhead involved.</p>
+<h2>The DTail family of commands</h2>
+<p>Following the UNIX philosophy, DTail includes multiple command-line commands, each for a different purpose:</p>
+<ul>
+<li>dserver: The DTail server, the only binary required to be installed on the servers involved.</li>
+<li>dtail: The distributed log tail client for following log files.</li>
+<li>dcat: The distributed cat client for concatenating and displaying text files.</li>
+<li>dgrep: The distributed grep client for searching text files for a regular expression pattern.</li>
+<li>dmap: The distributed map-reduce client for aggregating stats from log files.</li>
+</ul>
+<i>DGrep sample session animated gif:</i><a href="https://buetow.org/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/dgrep.gif"><img alt="DGrep sample session animated gif" title="DGrep sample session animated gif" src="https://buetow.org/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/dgrep.gif" /></a><br />
+<h2>Usage example</h2>
+<p>The use of these commands is almost self-explanatory for a person already used to the standard command line in Unix systems. One of the main goals is to make DTail easy to use. A tool that is too complicated to use under high-pressure scenarios (e.g., during an incident) can be quite detrimental.</p>
+<p>The basic idea is to start one of the clients from the command line and provide a list of servers to connect to with --servers. You also must provide a path of remote (log) files via --files. If you want to process multiple files per server, you could either provide a comma-separated list of file paths or make use of file system globbing (or a combination of both).</p>
+<p>The following example would connect to all DTail servers listed in serverlist.txt, follow all files ending in .log and filter for lines containing the string error. You can specify any Go-compatible regular expression. In this example we add the case-insensitive flag to the regex:</p>
+<pre>
+dtail --servers serverlist.txt --files '/var/log/*.log' --regex '(?i:error)'
+</pre>
+<p>You usually want to specify a regular expression as a client argument. This means that responses are pre-filtered on the server side, and only the matching lines are sent back to the client. If your logs are growing very rapidly and the regex is not specific enough, there is a chance that your client is not fast enough to keep up with processing all of the responses. This could be due to a network bottleneck, or something as simple as a slow terminal emulator displaying the log lines on the client side.</p>
+<p>A green 100 in the client output before each log line received from the server always indicates that there were no such problems and 100% of all log lines could be displayed on your terminal (have a look at the animated Gifs in this post). If the percentage falls below 100 it means that some of the channels used by the servers to send data to the client are congested and lines were dropped. In this case, the color will change from green to red. The user then could decide to run the same query but with a more specific regex.</p>
+<p>You could also provide a comma-separated list of servers as opposed to a text file. There are many more options you could use. The ones listed here are just the very basic ones. There are more instructions and usage examples on the GitHub page. Also, you can study even more of the available options via the --help switch (some real treasures might be hidden there).</p>
+<h2>Fitting it in</h2>
+<p>DTail integrates nicely into the user management of existing infrastructure. It follows normal system permissions and does not open new “holes” on the server, which helps to keep security departments happy. Users have neither more nor fewer file read permissions than they would have via a regular SSH login shell. There is full support for SSH keys, traditional UNIX permissions, and Linux ACLs. The resource footprint is also very low: on average, tailing and searching log files requires less than 100MB of RAM and less than a quarter of a CPU core per participating server. Complex map-reduce queries on big data sets will require more resources accordingly.</p>
+<h2>Advanced features</h2>
+<p>The features listed here are out of scope for this blog post but are worth mentioning:</p>
+<ul>
+<li>Distributed map-reduce queries on stats provided in log files with dmap. dmap comes with its own SQL-like aggregation query language.</li>
+<li>Stats streaming with continuous map-reduce queries. The difference from normal queries is that the stats are aggregated over a specified interval, only on the newly written log lines. This gives a de-facto live stats view for each interval.</li>
+<li>Server-side scheduled queries on log files. The queries are configured in the DTail server configuration file and scheduled at certain time intervals. Results are written to CSV files. This is useful for generating daily stats from the log files without the need for an interactive client.</li>
+<li>Server-side stats streaming with continuous map-reduce queries. This for example can be used to periodically generate stats from the logs at a configured interval, e.g., log error counts by the minute. These then can be sent to a time-series database (e.g., Graphite) and then plotted in a Grafana dashboard.</li>
+<li>Support for custom extensions. E.g., for different server discovery methods (so you don’t have to rely on plain server lists) and log file formats (so that map-reduce queries can parse more stats from the logs).</li>
+</ul>
+<h2>For the future</h2>
+<p>There are various features we want to see in the future.</p>
+<ul>
+<li>A spartan mode, printing nothing but the raw remote log data, would be a nice feature to have. This would make it easier to post-process the data produced by the DTail client with common UNIX tools. (To some degree this is possible already: just disable the ANSI terminal color output of the client with -noColors and pipe the output to another program.)</li>
+<li>It would be tempting to implement a dgoawk command, a distributed version of the AWK programming language implemented purely in Go, for advanced text data stream processing capabilities. There are 3rd party libraries available implementing AWK in pure Go which could be used.</li>
+<li>A more complex change would be the support of federated queries. You can connect to thousands of servers from a single client running on a laptop, but does it scale to 100k servers? Some of the servers could be used as middleware for connecting to even more servers.</li>
+<li>Another aspect is to extend the documentation. Especially the advanced features, such as the map-reduce query language and how to configure the server-side queries, currently require more documentation. For now, you can read the code, the sample config files or just ask the author! This will certainly be addressed in the future.</li>
+</ul>
+<h2>Open Source</h2>
+<p>Mimecast highly encourages you to have a look at DTail and submit an issue for any features you would like to see. Have you found a bug? Maybe you just have a question or comment? If you want to go a step further: We would also love to see pull requests for any features or improvements. Either way, if in doubt just contact us via the DTail GitHub page.</p>
+<a class="textlink" href="https://dtail.dev">https://dtail.dev</a><br />
+<p>E-Mail me your thoughts at comments@mx.buetow.org!</p>
+ </content>
</entry>
<entry>
<title>Methods in C</title>
<link href="gemini://buetow.org/gemfeed/2016-11-20-methods-in-c.gmi" />
<id>gemini://buetow.org/gemfeed/2016-11-20-methods-in-c.gmi</id>
<updated>2016-11-20T18:36:51+01:00</updated>
- <summary>You can do some sort of object oriented programming in the C Programming Language. However, that is very limited. But also very easy and straight forward to use.. .....to read on please visit my site.</summary>
<author>
<name>Paul Buetow</name>
<email>comments@mx.buetow.org</email>
</author>
+ <summary>You can do some sort of object oriented programming in the C Programming Language. However, that is very limited. But also very easy and straight forward to use.. .....to read on please visit my site.</summary>
+ <content type="text/html">
+ <h1>Methods in C</h1>
+<p>You can do some sort of object-oriented programming in the C programming language. It is very limited, but also very easy and straightforward to use.</p>
+<h2>Example</h2>
+<p>Let's have a look at the following sample program. Basically, all you have to do is add a function pointer such as "calculate" to the definition of the struct "something_s". Later, during the struct initialization, assign a function address to that function pointer:</p>
+<pre>
+#include &lt;stdio.h&gt;
+
+typedef struct {
+ double (*calculate)(const double, const double);
+ char *name;
+} something_s;
+
+double multiplication(const double a, const double b) {
+ return a * b;
+}
+
+double division(const double a, const double b) {
+ return a / b;
+}
+
+int main(void) {
+ something_s mult = (something_s) {
+ .calculate = multiplication,
+ .name = "Multiplication"
+ };
+
+ something_s div = (something_s) {
+ .calculate = division,
+ .name = "Division"
+ };
+
+ const double a = 3, b = 2;
+
+ printf("%s(%f, %f) =&gt; %f\n", mult.name, a, b, mult.calculate(a,b));
+ printf("%s(%f, %f) =&gt; %f\n", div.name, a, b, div.calculate(a,b));
+}
+</pre>
+<p>As you can see, you can call the function (pointed to by the function pointer) the same way as in C++ or Java via:</p>
+<pre>
+printf("%s(%f, %f) =&gt; %f\n", mult.name, a, b, mult.calculate(a,b));
+printf("%s(%f, %f) =&gt; %f\n", div.name, a, b, div.calculate(a,b));
+</pre>
+<p>However, that's just syntactic sugar for:</p>
+<pre>
+printf("%s(%f, %f) =&gt; %f\n", mult.name, a, b, (*mult.calculate)(a,b));
+printf("%s(%f, %f) =&gt; %f\n", div.name, a, b, (*div.calculate)(a,b));
+</pre>
+<p>Output:</p>
+<pre>
+pbuetow ~/git/blog/source [38268]% gcc methods-in-c.c -o methods-in-c
+pbuetow ~/git/blog/source [38269]% ./methods-in-c
+Multiplication(3.000000, 2.000000) =&gt; 6.000000
+Division(3.000000, 2.000000) =&gt; 1.500000
+</pre>
+<p>Not complicated at all, but nice to know and helps to make the code easier to read!</p>
+<h2>The flaw</h2>
+<p>That's actually not really how it works in object-oriented languages such as Java and C++. The method call in this example is not really a method call, as "mult" and "div" are not "message receivers". What I mean by that is that the functions cannot access the state of the "mult" and "div" struct objects. In C, if you wanted to access the state of "mult" from within the calculate function, you would have to pass it as an argument:</p>
+<pre>
+mult.calculate(mult, a, b);
+</pre>
+<p>How to overcome this? You need to take it further...</p>
+<h2>Taking it further</h2>
+<p>If you want to take it further, type "Object-Oriented Programming with ANSI-C" into your favorite internet search engine and you will find some crazy stuff. Some go as far as writing a C preprocessor in AWK which takes object-oriented pseudo-C and transforms it into plain C, so that the C compiler can compile it to machine code. This is actually similar to how the C++ language had its origins.</p>
+<p>E-Mail me your thoughts at comments@mx.buetow.org!</p>
+ </content>
</entry>
<entry>
<title>Spinning up my own authoritative DNS servers</title>
<link href="gemini://buetow.org/gemfeed/2016-05-22-spinning-up-my-own-authoritative-dns-servers.gmi" />
<id>gemini://buetow.org/gemfeed/2016-05-22-spinning-up-my-own-authoritative-dns-servers.gmi</id>
<updated>2016-05-22T18:59:01+01:00</updated>
- <summary>Finally, I had time to deploy my own authoritative DNS servers (master and slave) for my domains 'buetow.org' and 'buetow.zone'. My domain name provider is Schlund Technologies. They allow their customers to manually edit the DNS records (BIND files). And they also give you the opportunity to set your own authoritative DNS servers for your domains. From now I am making use of that option.. .....to read on please visit my site.</summary>
<author>
<name>Paul Buetow</name>
<email>comments@mx.buetow.org</email>
</author>
+ <summary>Finally, I had time to deploy my own authoritative DNS servers (master and slave) for my domains 'buetow.org' and 'buetow.zone'. My domain name provider is Schlund Technologies. They allow their customers to manually edit the DNS records (BIND files). And they also give you the opportunity to set your own authoritative DNS servers for your domains. From now I am making use of that option.. .....to read on please visit my site.</summary>
+ <content type="text/html">
+ <h1>Spinning up my own authoritative DNS servers</h1>
+<h2>Background</h2>
+<p>Finally, I had time to deploy my own authoritative DNS servers (master and slave) for my domains "buetow.org" and "buetow.zone". My domain name provider is Schlund Technologies. They allow their customers to manually edit the DNS records (BIND files), and they also give you the opportunity to set your own authoritative DNS servers for your domains. From now on, I am making use of that option.</p>
+<a class="textlink" href="http://www.schlundtech.de">Schlund Technologies</a><br />
+<h2>All FreeBSD Jails</h2>
+<p>In order to set up my authoritative DNS servers, I installed a FreeBSD Jail dedicated to DNS with Puppet on my root machine as follows:</p>
+<pre>
+include freebsd
+
+freebsd::ipalias { '2a01:4f8:120:30e8::14':
+ ensure =&gt; up,
+ proto =&gt; 'inet6',
+ preflen =&gt; '64',
+ interface =&gt; 're0',
+ aliasnum =&gt; '5',
+}
+
+include jail::freebsd
+
+class { 'jail':
+ ensure =&gt; present,
+ jails_config =&gt; {
+ dns =&gt; {
+ '_ensure' =&gt; present,
+ '_type' =&gt; 'freebsd',
+ '_mirror' =&gt; 'ftp://ftp.de.freebsd.org',
+ '_remote_path' =&gt; 'FreeBSD/releases/amd64/10.1-RELEASE',
+ '_dists' =&gt; [ 'base.txz', 'doc.txz', ],
+ '_ensure_directories' =&gt; [ '/opt', '/opt/enc' ],
+ 'host.hostname' =&gt; "'dns.ian.buetow.org'",
+ 'ip4.addr' =&gt; '192.168.0.15',
+ 'ip6.addr' =&gt; '2a01:4f8:120:30e8::15',
+ },
+ .
+ .
+ }
+}
+</pre>
+<h2>PF firewall</h2>
+<p>Please note that "dns.ian.buetow.org" is just the Jail name of the master DNS server (and "caprica.ian.buetow.org" the name of the Jail for the slave DNS server) and that I am using the DNS names "dns1.buetow.org" (master) and "dns2.buetow.org" (slave) for the actual service names (these are the DNS servers visible to the public). Please also note that the IPv4 address is an internal one: I have PF configured to use NAT and PAT. The DNS ports (TCP and UDP) are forwarded to that Jail. By default, all ports are blocked, so I am adding an exception rule for the IPv6 address as well. These are the PF rules in use:</p>
+<pre>
+% cat /etc/pf.conf
+.
+.
+# dns.ian.buetow.org
+rdr pass on re0 proto tcp from any to $pub_ip port {53} -&gt; 192.168.0.15
+rdr pass on re0 proto udp from any to $pub_ip port {53} -&gt; 192.168.0.15
+pass in on re0 inet6 proto tcp from any to 2a01:4f8:120:30e8::15 port {53} flags S/SA keep state
+pass in on re0 inet6 proto udp from any to 2a01:4f8:120:30e8::15 port {53} flags S/SA keep state
+.
+.
+</pre>
+<h2>Puppet managed BIND zone files</h2>
+<p>In "manifests/dns.pp" (the Puppet manifest for the Master DNS Jail itself) I configured the BIND DNS server this way:</p>
+<pre>
+class { 'bind_freebsd':
+ config =&gt; "puppet:///files/bind/named.${::hostname}.conf",
+ dynamic_config =&gt; "puppet:///files/bind/dynamic.${::hostname}",
+}
+</pre>
+<p>The Puppet module is actually a pretty simple one. It installs the file "/usr/local/etc/namedb/named.conf" and populates the "/usr/local/etc/namedb/dynamic" directory with all my zone files.</p>
+<p>Once (Puppet-) applied inside of the Jail I get this:</p>
+<pre>
+paul uranus:~/git/blog/source [4268]% ssh admin@dns1.buetow.org.buetow.org pgrep -lf named
+60748 /usr/local/sbin/named -u bind -c /usr/local/etc/namedb/named.conf
+paul uranus:~/git/blog/source [4269]% ssh admin@dns1.buetow.org.buetow.org tail -n 13 /usr/local/etc/namedb/named.conf
+zone "buetow.org" {
+ type master;
+ notify yes;
+ allow-update { key "buetoworgkey"; };
+ file "/usr/local/etc/namedb/dynamic/buetow.org";
+};
+
+zone "buetow.zone" {
+ type master;
+ notify yes;
+ allow-update { key "buetoworgkey"; };
+ file "/usr/local/etc/namedb/dynamic/buetow.zone";
+};
+paul uranus:~/git/blog/source [4277]% ssh admin@dns1.buetow.org.buetow.org cat /usr/local/etc/namedb/dynamic/buetow.org
+$TTL 3600
+@ IN SOA dns1.buetow.org. domains.buetow.org. (
+ 25 ; Serial
+ 604800 ; Refresh
+ 86400 ; Retry
+ 2419200 ; Expire
+ 604800 ) ; Negative Cache TTL
+; Infrastructure domains
+@ IN NS dns1
+@ IN NS dns2
+* 300 IN CNAME web.ian
+buetow.org. 86400 IN A 78.46.80.70
+buetow.org. 86400 IN AAAA 2a01:4f8:120:30e8:0:0:0:11
+buetow.org. 86400 IN MX 10 mail.ian
+dns1 86400 IN A 78.46.80.70
+dns1 86400 IN AAAA 2a01:4f8:120:30e8:0:0:0:15
+dns2 86400 IN A 164.177.171.32
+dns2 86400 IN AAAA 2a03:2500:1:6:20::
+.
+.
+.
+.
+</pre>
+<p>That is my master DNS server. My slave DNS server runs in another Jail on another bare metal machine, located in a different DC and in different IP subnets. Everything is set up similarly to the master DNS server. The only difference is the "named.conf": it is configured to be a slave, which means that the "dynamic" directory gets populated by BIND itself through zone transfers from the master.</p>
+<pre>
+paul uranus:~/git/blog/source [4279]% ssh admin@dns2.buetow.org tail -n 11 /usr/local/etc/namedb/named.conf
+zone "buetow.org" {
+ type slave;
+ masters { 78.46.80.70; };
+ file "/usr/local/etc/namedb/dynamic/buetow.org";
+};
+
+zone "buetow.zone" {
+ type slave;
+ masters { 78.46.80.70; };
+ file "/usr/local/etc/namedb/dynamic/buetow.zone";
+};
+</pre>
+<h2>The end result</h2>
+<p>The end result looks like this now:</p>
+<pre>
+% dig -t ns buetow.org
+; &lt;&lt;&gt;&gt; DiG 9.10.3-P4-RedHat-9.10.3-12.P4.fc23 &lt;&lt;&gt;&gt; -t ns buetow.org
+;; global options: +cmd
+;; Got answer:
+;; -&gt;&gt;HEADER&lt;&lt;- opcode: QUERY, status: NOERROR, id: 37883
+;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
+
+;; OPT PSEUDOSECTION:
+; EDNS: version: 0, flags:; udp: 512
+;; QUESTION SECTION:
+;buetow.org. IN NS
+
+;; ANSWER SECTION:
+buetow.org. 600 IN NS dns2.buetow.org.
+buetow.org. 600 IN NS dns1.buetow.org.
+
+;; Query time: 41 msec
+;; SERVER: 192.168.1.254#53(192.168.1.254)
+;; WHEN: Sun May 22 11:34:11 BST 2016
+;; MSG SIZE rcvd: 77
+
+% dig -t any buetow.org @dns1.buetow.org
+; &lt;&lt;&gt;&gt; DiG 9.10.3-P4-RedHat-9.10.3-12.P4.fc23 &lt;&lt;&gt;&gt; -t any buetow.org @dns1.buetow.org
+;; global options: +cmd
+;; Got answer:
+;; -&gt;&gt;HEADER&lt;&lt;- opcode: QUERY, status: NOERROR, id: 49876
+;; flags: qr aa rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 7
+
+;; OPT PSEUDOSECTION:
+; EDNS: version: 0, flags:; udp: 4096
+;; QUESTION SECTION:
+;buetow.org. IN ANY
+
+;; ANSWER SECTION:
+buetow.org. 86400 IN A 78.46.80.70
+buetow.org. 86400 IN AAAA 2a01:4f8:120:30e8::11
+buetow.org. 86400 IN MX 10 mail.ian.buetow.org.
+buetow.org. 3600 IN SOA dns1.buetow.org. domains.buetow.org. 25 604800 86400 2419200 604800
+buetow.org. 3600 IN NS dns2.buetow.org.
+buetow.org. 3600 IN NS dns1.buetow.org.
+
+;; ADDITIONAL SECTION:
+mail.ian.buetow.org. 86400 IN A 78.46.80.70
+dns1.buetow.org. 86400 IN A 78.46.80.70
+dns2.buetow.org. 86400 IN A 164.177.171.32
+mail.ian.buetow.org. 86400 IN AAAA 2a01:4f8:120:30e8::12
+dns1.buetow.org. 86400 IN AAAA 2a01:4f8:120:30e8::15
+dns2.buetow.org. 86400 IN AAAA 2a03:2500:1:6:20::
+
+;; Query time: 42 msec
+;; SERVER: 78.46.80.70#53(78.46.80.70)
+;; WHEN: Sun May 22 11:34:41 BST 2016
+;; MSG SIZE rcvd: 322
+</pre>
+<h2>Monitoring</h2>
+<p>For monitoring I am using Icinga2 (I am operating two Icinga2 instances in two different DCs). I may have to post another blog article about Icinga2, but to give you the idea, these are the snippets I added to my Icinga2 configuration:</p>
+<pre>
+apply Service "dig" {
+ import "generic-service"
+
+ check_command = "dig"
+ vars.dig_lookup = "buetow.org"
+ vars.timeout = 30
+
+ assign where host.name == "dns.ian.buetow.org" || host.name == "caprica.ian.buetow.org"
+}
+
+apply Service "dig6" {
+ import "generic-service"
+
+ check_command = "dig"
+ vars.dig_lookup = "buetow.org"
+ vars.timeout = 30
+ vars.check_ipv6 = true
+
+ assign where host.name == "dns.ian.buetow.org" || host.name == "caprica.ian.buetow.org"
+}
+</pre>
+<h2>DNS update workflow</h2>
+<p>Whenever I have to change a DNS entry all I have to do is:</p>
+<ul>
+<li>Git clone or update the Puppet repository</li>
+<li>Update/commit and push the zone file (e.g. "buetow.org")</li>
+<li>Wait for Puppet. Puppet will deploy the updated zone file and reload the BIND server.</li>
+<li>The BIND server will notify all slave DNS servers (at the moment only one), which will transfer the new version of the zone.</li>
+</ul>
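+<p>Since the zones are also configured with "allow-update" for the "buetoworgkey", individual records could in principle be changed on the fly with "nsupdate" as well. A sketch (the key file path, record name, and address are made-up examples):</p>
+<pre>
+% nsupdate -k /usr/local/etc/namedb/buetoworgkey.key
+&gt; server dns1.buetow.org
+&gt; zone buetow.org
+&gt; update add test.buetow.org. 300 IN A 192.0.2.1
+&gt; send
+&gt; quit
+</pre>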
+<p>That's much more comfortable than manually clicking through some web UIs at Schlund Technologies.</p>
+<p>E-Mail me your thoughts at comments@mx.buetow.org!</p>
+ </content>
</entry>
<entry>
<title>Offsite backup with ZFS (Part 2)</title>
<link href="gemini://buetow.org/gemfeed/2016-04-16-offsite-backup-with-zfs-part2.gmi" />
<id>gemini://buetow.org/gemfeed/2016-04-16-offsite-backup-with-zfs-part2.gmi</id>
<updated>2016-04-16T22:43:42+01:00</updated>
- <summary>I enhanced the procedure a bit. From now on I am having two external 2TB USB hard drives. Both are setup exactly the same way. To decrease the probability that they will not fail at about the same time both drives are of different brands. One drive is kept at the secret location. The other one is kept at home right next to my HP MicroServer. ...to read on visit my site.</summary>
<author>
<name>Paul Buetow</name>
<email>comments@mx.buetow.org</email>
</author>
+ <summary>I enhanced the procedure a bit. From now on I am having two external 2TB USB hard drives. Both are set up exactly the same way. To decrease the probability that they will fail at about the same time, both drives are of different brands. One drive is kept at the secret location. The other one is kept at home right next to my HP MicroServer. ...to read on visit my site.</summary>
+ <content type="text/html">
+ <h1>Offsite backup with ZFS (Part 2)</h1>
+<pre>
+ ________________
+|# : : #|
+| : ZFS/GELI : |________________
+| : Offsite : |# : : #|
+| : Backup 1 : | : ZFS/GELI : |
+| :___________: | : Offsite : |
+| _________ | : Backup 2 : |
+| | __ | | :___________: |
+| || | | | _________ |
+\____||__|_____|_| | __ | |
+ | || | | |
+ \____||__|_____|__|
+</pre>
+<a class="textlink" href="https://buetow.org/gemfeed/2016-04-03-offsite-backup-with-zfs.html">Read the first part before reading any further here...</a><br />
+<p>I enhanced the procedure a bit. From now on I am having two external 2TB USB hard drives. Both are set up exactly the same way. To decrease the probability that they will fail at about the same time, both drives are of different brands. One drive is kept at the secret location. The other one is kept at home right next to my HP MicroServer.</p>
+<p>Whenever I am updating the offsite backup, I am doing it to the drive which is kept locally. Afterwards I bring it to the secret location, swap the drives, and bring the other one back home. This ensures that I will always have an offsite backup available at a different location than my home - even while updating one copy of it.</p>
+<p>Furthermore, I added scrubbing (*zpool scrub...*) to the script. It ensures that the file system is consistent and that there are no bad blocks on the disk. To increase the reliability I also run a *zfs set copies=2 zroot*. That setting is also synchronized to the offsite ZFS pool. ZFS now stores every data block to disk twice. Yes, it consumes twice as much disk space, but it makes it more fault tolerant against hardware errors (e.g. only individual disk sectors going bad).</p>
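+<p>Sketched as commands, these additions boil down to something like this (the offsite pool name here is just an example):</p>
+<pre>
+# Store every data block twice on the local pool
+zfs set copies=2 zroot
+
+# Verify the checksums of all blocks on the offsite pool
+zpool scrub offsitepool
+zpool status offsitepool
+</pre>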
+<p>E-Mail me your thoughts at comments@mx.buetow.org!</p>
+ </content>
</entry>
<entry>
<title>Offsite backup with ZFS</title>
<link href="gemini://buetow.org/gemfeed/2016-04-03-offsite-backup-with-zfs.gmi" />
<id>gemini://buetow.org/gemfeed/2016-04-03-offsite-backup-with-zfs.gmi</id>
<updated>2016-04-03T22:43:42+01:00</updated>
- <summary>When it comes to data storage and potential data loss I am a paranoid person. It is not just due to my job but also due to a personal experience I encountered over 10 years ago: A single drive failure and loss of all my data (pictures, music, ....). ...to read on visit my site.</summary>
<author>
<name>Paul Buetow</name>
<email>comments@mx.buetow.org</email>
</author>
+ <summary>When it comes to data storage and potential data loss I am a paranoid person. It is not just due to my job but also due to a personal experience I encountered over 10 years ago: A single drive failure and loss of all my data (pictures, music, ....). ...to read on visit my site.</summary>
+ <content type="text/html">
+ <h1>Offsite backup with ZFS</h1>
+<pre>
+ ________________
+|# : : #|
+| : ZFS/GELI : |
+| : Offsite : |
+| : Backup : |
+| :___________: |
+| _________ |
+| | __ | |
+| || | | |
+\____||__|_____|__|
+</pre>
+<h2>Please don't lose all my pictures again!</h2>
+<p>When it comes to data storage and potential data loss I am a paranoid person. It is not just due to my job but also due to a personal experience I encountered over 10 years ago: A single drive failure and loss of all my data (pictures, music, ....).</p>
+<p>A little about my personal infrastructure: I am running my own (mostly FreeBSD based) root servers (across several countries: Two in Germany, one in Canada, one in Bulgaria) which store all my online data (E-Mail and my Git repositories). I am syncing incremental (and encrypted) ZFS snapshots back and forth between these servers so either server's data can be recovered from the other one.</p>
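+<p>Such an incremental sync between two servers can be sketched like this (the host, dataset, and snapshot names are made up for illustration):</p>
+<pre>
+# Take a new snapshot and send only the delta since the last one
+zfs snapshot zroot/data@2016-04-03
+zfs send -i zroot/data@2016-03-27 zroot/data@2016-04-03 | \
+  ssh otherserver zfs recv zroot/backup/data
+</pre>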
+<h2>Local storage box for offline data</h2>
+<p>Also, I am operating a local server (an HP MicroServer) at home in my apartment. Full snapshots of all ZFS volumes are pulled from the "online" servers to the local server every other week and the incremental ZFS snapshots every day. That local server has a ZFS ZMIRROR with 3 disks configured (local triple redundancy). I keep up to half a year worth of ZFS snapshots of all volumes. That local server also contains all my offline data such as pictures, private documents, videos, books, various other backups, etc.</p>
+<p>Once weekly all the data of that local server is copied to two external USB drives as a backup (without the historic snapshots). For simplicity these USB drives are not formatted with ZFS but with good old UFS. This gives me a chance to recover from a (potential) ZFS disaster. ZFS is a complex thing. Sometimes it is good not to trust complex things!</p>
+<h2>Storing it at my apartment is not enough</h2>
+<p>Now I am thinking about an offsite backup of all this local data. The problem is that all the data remains in a single physical location: My local MicroServer. What happens when the house burns down or someone steals my server including the internal disks and the attached USB drives? My first thought was to back up everything to the "cloud". The major issue here, however, is the limited amount of available upload bandwidth (only 1MBit/s).</p>
+<p>The solution is adding another USB drive (2TB) with an encryption container (GELI) and a ZFS pool on it. The GELI encryption requires a secret key and a secret passphrase. I am updating the data on that drive once every 3 months (my calendar is reminding me about it) and afterwards I keep that drive at a secret location outside of my apartment. All the information needed to decrypt (mounting the GELI container) is stored at another (secure) place. Key and passphrase are kept at different places though. Even if someone knew of it, they would not be able to decrypt it, as some additional insider knowledge would be required as well.</p>
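+<p>Setting up such a drive could, roughly, look like this (the device name, key path, and pool name are assumptions):</p>
+<pre>
+# One-time initialization of the encrypted container
+geli init -K /path/to/secret.key /dev/da0
+
+# Attach it (asks for the passphrase) and create the ZFS pool on top
+geli attach -k /path/to/secret.key /dev/da0
+zpool create offsitepool /dev/da0.eli
+</pre>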
+<h2>Walking one round less</h2>
+<p>I am thinking of buying a second 2TB USB drive and setting it up the same way as the first one. So I could alternate the backups: One drive would be at the secret location, and the other drive would be at home. And these drives would swap location after each cycle. This would give some protection against the failure of one drive, and I would have to go to the secret location only once (swapping the drives) instead of twice (picking that drive up in order to update the data + bringing it back to the secret location).</p>
+<p>E-Mail me your thoughts at comments@mx.buetow.org!</p>
+ </content>
</entry>
<entry>
<title>The Fype Programming Language</title>
<link href="gemini://buetow.org/gemfeed/2010-05-09-the-fype-programming-language.gmi" />
<id>gemini://buetow.org/gemfeed/2010-05-09-the-fype-programming-language.gmi</id>
<updated>2010-05-09T12:48:29+01:00</updated>
- <summary>Fype is an interpreted programming language created by me for learning and fun. The interpreter is written in C. It has been tested on FreeBSD and NetBSD and may also work on other Unix like operating systems such as Linux based ones. To be honest, besides learning and fun there is really no other use case of why Fype actually exists as many other programming languages are much faster and more powerful.. .....to read on please visit my site.</summary>
<author>
<name>Paul Buetow</name>
<email>comments@mx.buetow.org</email>
</author>
+ <summary>Fype is an interpreted programming language created by me for learning and fun. The interpreter is written in C. It has been tested on FreeBSD and NetBSD and may also work on other Unix-like operating systems such as Linux based ones. To be honest, besides learning and fun there is really no other use case of why Fype actually exists, as many other programming languages are much faster and more powerful. ... to read on please visit my site.</summary>
+ <content type="text/html">
+ <h1>The Fype Programming Language</h1>
+<p>Fype is an interpreted programming language created by me for learning and fun. The interpreter is written in C. It has been tested on FreeBSD and NetBSD and may also work on other Unix-like operating systems such as Linux based ones. To be honest, besides learning and fun there is really no other use case of why Fype actually exists, as many other programming languages are much faster and more powerful.</p>
+<p>The Fype syntax is very simple, using a maximum lookahead of 1 and a very easy top-down parsing mechanism. Fype parses and interprets its code simultaneously. This means that syntax errors are only detected at program runtime.</p>
+<p>Fype is a recursive acronym and means "Fype is For Your Program Execution" or "Fype is Free Yak Programmed for ELF". You could also say "It's not a hype - it's Fype!".</p>
+<h2>Object oriented C style</h2>
+<p>The Fype interpreter is written in an object oriented style of C. Each "main component" has its own .h and .c file. There is a struct type for each component (most components, at least) which can be initialized using a "COMPONENT_new" function and destroyed using a "COMPONENT_delete" function. Method calls follow the same scheme, e.g. "COMPONENT_METHODNAME". There is no class inheritance or polymorphism involved.</p>
+<p>To give you an idea how it works, here is an example snippet from the main Fype "class header":</p>
+<pre>
+typedef struct {
+ Tupel *p_tupel_argv; // Contains command line options
+ List *p_list_token; // Initial list of tokens
+ Hash *p_hash_syms; // Symbol table
+ char *c_basename;
+} Fype;
+</pre>
+<p>And here is a snippet from the main Fype "class implementation":</p>
+<pre>
+Fype*
+fype_new() {
+ Fype *p_fype = malloc(sizeof(Fype));
+
+ p_fype-&gt;p_hash_syms = hash_new(512);
+ p_fype-&gt;p_list_token = list_new();
+ p_fype-&gt;p_tupel_argv = tupel_new();
+ p_fype-&gt;c_basename = NULL;
+
+ garbage_init();
+
+ return (p_fype);
+}
+
+void
+fype_delete(Fype *p_fype) {
+ argv_tupel_delete(p_fype-&gt;p_tupel_argv);
+
+ hash_iterate(p_fype-&gt;p_hash_syms, symbol_cleanup_hash_syms_cb);
+ hash_delete(p_fype-&gt;p_hash_syms);
+
+ list_iterate(p_fype-&gt;p_list_token, token_ref_down_cb);
+ list_delete(p_fype-&gt;p_list_token);
+
+ if (p_fype-&gt;c_basename)
+ free(p_fype-&gt;c_basename);
+
+ garbage_destroy();
+}
+
+int
+fype_run(int i_argc, char **pc_argv) {
+ Fype *p_fype = fype_new();
+
+ // argv: Maintains command line options
+ argv_run(p_fype, i_argc, pc_argv);
+
+ // scanner: Creates a list of tokens
+ scanner_run(p_fype);
+
+ // interpret: Interprets the list of tokens
+ interpret_run(p_fype);
+
+ fype_delete(p_fype);
+
+ return (0);
+}
+</pre>
+<h2>Data types</h2>
+<p>Fype uses auto type conversion. However, if you want to know what's going on you may take a look at the following basic data types:</p>
+<ul>
+<li>integer - Specifies a number</li>
+<li>double - Specifies a double precision number</li>
+<li>string - Specifies a string</li>
+<li>number - May be an integer or a double number</li>
+<li>any - May be any type above</li>
+<li>void - No type</li>
+<li>identifier - A variable, procedure, or function name</li>
+</ul>
+<p>There is no boolean type, but we can use the integer values 0 for false and 1 for true. There is support for explicit type casting too.</p>
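+<p>For example, the "double" keyword casts a string to a double precision number (the same cast also appears in the arrays example further down):</p>
+<pre>
+my d = double "3";
+say d; # Prints 3.000000
+</pre>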
+<h2>Syntax</h2>
+<h3>Comments</h3>
+<p>Text from a # character until the end of the current line is considered a comment. Multi-line comments start with #* and end with *# anywhere. The exception is when those signs appear inside of strings.</p>
+<h3>Variables</h3>
+<p>Variables can be defined with the "my" keyword (inspired by Perl :-). If you don't assign a value during declaration, then the default integer value 0 is used. Variables may be changed during program runtime. Variables may be deleted using the "undef" keyword! Example:</p>
+<pre>
+my foo = 1 + 2;
+say foo;
+
+my bar = 12, baz = foo;
+say 1 + bar;
+say bar;
+
+my baz;
+say baz; # Will print out 0
+</pre>
+<p>You may use the "defined" keyword to check if an identifier has been defined or not:</p>
+<pre>
+ifnot defined foo {
+ say "No foo yet defined";
+}
+
+my foo = 1;
+
+if defined foo {
+ put "foo is defined and has the value ";
+ say foo;
+}
+</pre>
+<h3>Synonyms</h3>
+<p>Each variable can have as many synonyms as you wish. A synonym is another name to access the content of a specific variable. Here is an example of how to use it:</p>
+<pre>
+my foo = "foo";
+my bar = \foo;
+foo = "bar";
+
+# The synonym variable should now also be set to "bar"
+assert "bar" == bar;
+</pre>
+<p>Synonyms can be used for all kinds of identifiers. They are not limited to normal variables but can also be used for function and procedure names etc. (more about functions and procedures later).</p>
+<pre>
+# Create a new procedure baz
+proc baz { say "I am baz"; }
+
+# Make a synonym baz, and undefine baz
+my bay = \baz;
+
+undef baz;
+
+# bay still has a reference of the original procedure baz
+bay; # this prints out "I am baz"
+</pre>
+<p>The "syms" keyword gives you the total number of synonyms pointing to a specific value:</p>
+<pre>
+my foo = 1;
+say syms foo; # Prints 1
+
+my baz = \foo;
+say syms foo; # Prints 2
+say syms baz; # Prints 2
+
+undef baz;
+say syms foo; # Prints 1
+</pre>
+<h2>Statements and expressions</h2>
+<p>A Fype program is a list of statements. Each keyword, expression or function call is part of a statement. Each statement is ended with a semicolon. Example:</p>
+<pre>
+my bar = 3, foo = 1 + 2;
+say foo;
+exit foo - bar;
+</pre>
+<h3>Parentheses</h3>
+<p>All parentheses for function arguments are optional. They help to make the code more readable. They also help to force precedence of expressions.</p>
+<h3>Basic expressions</h3>
+<p>Any "any" value holding a string will be automatically converted to an integer value.</p>
+<pre>
+(any) &lt;any&gt; + &lt;any&gt;
+(any) &lt;any&gt; - &lt;any&gt;
+(any) &lt;any&gt; * &lt;any&gt;
+(any) &lt;any&gt; / &lt;any&gt;
+(integer) &lt;any&gt; == &lt;any&gt;
+(integer) &lt;any&gt; != &lt;any&gt;
+(integer) &lt;any&gt; &lt;= &lt;any&gt;
+(integer) &lt;any&gt; gt &lt;any&gt;
+(integer) &lt;any&gt; &lt;&gt; &lt;any&gt;
+(integer) &lt;any&gt; gt &lt;any&gt;
+(integer) not &lt;any&gt;
+</pre>
+<h3>Bitwise expressions</h3>
+<pre>
+(integer) &lt;any&gt; :&lt; &lt;any&gt;
+(integer) &lt;any&gt; :&gt; &lt;any&gt;
+(integer) &lt;any&gt; and &lt;any&gt;
+(integer) &lt;any&gt; or &lt;any&gt;
+(integer) &lt;any&gt; xor &lt;any&gt;
+</pre>
+<h3>Numeric expressions</h3>
+<pre>
+(number) neg &lt;number&gt;
+</pre>
+<p>... returns the negative value of "number".</p>
+<pre>
+(integer) no &lt;integer&gt;
+</pre>
+<p>... returns 1 if the argument is 0, otherwise it will return 0! If no argument is given, then 0 is returned!</p>
+<pre>
+(integer) yes &lt;integer&gt;
+</pre>
+<p>... always returns 1. The parameter is optional. Example:</p>
+<pre>
+# Prints out 1, because foo is not defined
+if yes { say no defined foo; }
+</pre>
+<h2>Control statements</h2>
+<p>Control statements available in Fype:</p>
+<pre>
+if &lt;expression&gt; { &lt;statements&gt; }
+</pre>
+<p>... runs the statements if the expression evaluates to a true value.</p>
+<pre>
+ifnot &lt;expression&gt; { &lt;statements&gt; }
+</pre>
+<p>... runs the statements if the expression evaluates to a false value.</p>
+<pre>
+while &lt;expression&gt; { &lt;statements&gt; }
+</pre>
+<p>... runs the statements as long as the expression evaluates to a true value.</p>
+<pre>
+until &lt;expression&gt; { &lt;statements&gt; }
+</pre>
+<p>... runs the statements as long as the expression evaluates to a false value.</p>
+<h2>Scopes</h2>
+<p>A new scope starts with an { and ends with an }. An exception is a procedure, which does not use its own scope (see later in this manual). Control statements and functions support scopes. The "scope" function prints out all available symbols at the current scope. Here is a small example:</p>
+<pre>
+my foo = 1;
+
+{
+ # Prints out 1
+ put defined foo;
+ {
+ my bar = 2;
+
+ # Prints out 1
+ put defined bar;
+
+ # Prints out all available symbols at this
+ # point to stdout. Those are: bar and foo
+ scope;
+ }
+
+ # Prints out 0
+ put defined bar;
+
+ my baz = 3;
+}
+
+# Prints out 0
+say defined bar;
+</pre>
+<p>Another example including an actual output:</p>
+<pre>
+./fype -e 'my global; func foo { my var4; func bar { my var2, var3; func baz { my var1; scope; } baz; } bar; } foo;'
+Scopes:
+Scope stack size: 3
+Global symbols:
+SYM_VARIABLE: global (id=00034, line=-0001, pos=-001, type=TT_INTEGER, dval=0.000000, refs=-1)
+SYM_FUNCTION: foo
+Local symbols:
+SYM_VARIABLE: var1 (id=00038, line=-0001, pos=-001, type=TT_INTEGER, dval=0.000000, refs=-1)
+1 level(s) up:
+SYM_VARIABLE: var2 (id=00036, line=-0001, pos=-001, type=TT_INTEGER, dval=0.000000, refs=-1)
+SYM_VARIABLE: var3 (id=00037, line=-0001, pos=-001, type=TT_INTEGER, dval=0.000000, refs=-1)
+SYM_FUNCTION: baz
+2 level(s) up:
+SYM_VARIABLE: var4 (id=00035, line=-0001, pos=-001, type=TT_INTEGER, dval=0.000000, refs=-1)
+SYM_FUNCTION: bar
+</pre>
+<h2>Definedness</h2>
+<pre>
+(integer) defined &lt;identifier&gt;
+</pre>
+<p>... returns 1 if "identifier" has been defined. Returns 0 otherwise.</p>
+<pre>
+(integer) undef &lt;identifier&gt;
+</pre>
+<p>... tries to undefine/delete the "identifier". Returns 1 if it succeeded, otherwise 0 is returned.</p>
+<h2>System</h2>
+<p>These are some system and interpreter specific built-in functions supported:</p>
+<pre>
+(void) end
+</pre>
+<p>... exits the program with the exit status of 0.</p>
+<pre>
+(void) exit &lt;integer&gt;
+</pre>
+<p>... exits the program with the specified exit status.</p>
+<pre>
+(integer) fork
+</pre>
+<p>... forks a subprocess. It returns 0 for the child process and the pid of the child process otherwise! Example:</p>
+<pre>
+my pid = fork;
+
+if pid {
+ put "I am the parent process; child has the pid ";
+ say pid;
+
+} ifnot pid {
+ say "I am the child process";
+}
+</pre>
+<p>To execute the garbage collector do:</p>
+<pre>
+(integer) gc
+</pre>
+<p>It returns the number of items freed! You may wonder why most of the time it will return a value of 0! That is because Fype tries to free unneeded memory ASAP. This may change in future versions in order to gain faster execution speed!</p>
+<h3>I/O</h3>
+<pre>
+(any) put &lt;any&gt;
+</pre>
+<p>... prints out the argument.</p>
+<pre>
+(any) say &lt;any&gt;
+</pre>
+<p>... is the same as "put", but also appends a newline.</p>
+<pre>
+(void) ln
+</pre>
+<p>... just prints a newline.</p>
+<h2>Procedures and functions</h2>
+<h3>Procedures</h3>
+<p>A procedure can be defined with the "proc" keyword and deleted with the "undef" keyword. A procedure does not return any value and does not support parameter passing. It uses already defined variables (e.g. global variables). A procedure does not have its own namespace; it uses the calling namespace. It is possible to define new variables inside of a procedure in the current namespace.</p>
+<pre>
+proc foo {
+ say 1 + a * 3 + b;
+ my c = 6;
+}
+
+my a = 2, b = 4;
+
+foo; # Run the procedure. Print out "11\n"
+say c; # Print out "6\n";
+</pre>
+<h3>Nested procedures</h3>
+<p>It's possible to define procedures inside of procedures. Since procedures don't have their own scope, nested procedures become available in the current scope as soon as the outer procedure has run for the first time. You may use the "defined" keyword in order to check if a procedure has been defined or not.</p>
+<pre>
+proc foo {
+ say "I am foo";
+
+ undef bar;
+ proc bar {
+ say "I am bar";
+ }
+}
+
+# Here bar would produce an error because
+# the proc is not yet defined!
+# bar;
+
+foo; # Here the procedure foo will define the procedure bar!
+bar; # Now the procedure bar is defined!
+foo; # Here the procedure foo will redefine bar again!
+</pre>
+<h3>Functions</h3>
+<p>A function can be defined with the "func" keyword and deleted with the "undef" keyword. Functions do not yet return values and do not yet support parameter passing. A function uses local (lexically scoped) variables. If a certain variable does not exist locally, already defined variables from outer scopes are used (e.g. one scope above).</p>
+<pre>
+func foo {
+ say 1 + a * 3 + b;
+ my c = 6;
+}
+
+my a = 2, b = 4;
+
+foo; # Run the function. Prints out "11\n"
+say c; # Will produce an error, because c is out of scope!
+</pre>
+<h3>Nested functions</h3>
+<p>Nested functions work the same way as nested procedures, with the exception that nested functions will not be available anymore after the outer function has been left!</p>
+<pre>
+func foo {
+ func bar {
+ say "Hello i am nested";
+ }
+
+ bar; # Calling nested
+}
+
+foo;
+bar; # Will produce an error, because bar is out of scope!
+</pre>
+<h2>Arrays</h2>
+<p>Some progress on arrays has been made too. The following example creates a multi-dimensional array "foo". Its first element is the return value of the func "bar". The fourth value is a string "3" converted to a double number. The last element is an anonymous array which itself contains another anonymous array as its last element:</p>
+<pre>
+func bar { say "bar" }
+my foo = [bar, 1, 4/2, double "3", ["A", ["BA", "BB"]]];
+say foo;
+</pre>
+<p>It produces the following output:</p>
+<pre>
+% ./fype arrays.fy
+bar
+01
+2
+3.000000
+A
+BA
+BB
+</pre>
+<h2>Fancy stuff</h2>
+<p>Fancy stuff like OOP or Unicode or threading is not planned. But fancy stuff like function pointers and closures may be considered. :)</p>
+<h2>May the source be with you</h2>
+<p>You can find all of this on the GitHub page. There is also an "examples" folder containing some Fype scripts!</p>
+<a class="textlink" href="https://github.com/snonux/fype">https://github.com/snonux/fype</a><br />
+<p>E-Mail me your thoughts at comments@mx.buetow.org!</p>
+ </content>
</entry>
<entry>
<title>Perl Poetry</title>
<link href="gemini://buetow.org/gemfeed/2008-06-26-perl-poetry.gmi" />
<id>gemini://buetow.org/gemfeed/2008-06-26-perl-poetry.gmi</id>
<updated>2008-06-26T21:43:51+01:00</updated>
- <summary>Here are some Perl Poems I wrote. They don't do anything useful when you run them but they don't produce a compiler error either. They only exists for fun and demonstrate what you can do with Perl syntax.. .....to read on please visit my site.</summary>
<author>
<name>Paul Buetow</name>
<email>comments@mx.buetow.org</email>
</author>
+ <summary>Here are some Perl Poems I wrote. They don't do anything useful when you run them but they don't produce a compiler error either. They only exist for fun and demonstrate what you can do with Perl syntax. ... to read on please visit my site.</summary>
+ <content type="text/html">
+ <h1>Perl Poetry</h1>
+<pre>
+ '\|/' *
+-- * -----
+ /|\ ____
+ ' | ' {_ o^&gt; *
+ : -_ /)
+ : ( ( .-''`'.
+ . \ \ / \
+ . \ \ / \
+ \ `-' `'.
+ \ . ' / `.
+ \ ( \ ) ( .')
+ ,, t '. | / | (
+ '|``_/^\___ '| |`'-..-'| ( ()
+_~~|~/_|_|__/|~~~~~~~ | / ~~~~~ | | ~~~~~~~~
+ -_ |L[|]L|/ | |\ MJP ) )
+ ( |( / /|
+ ~~ ~ ~ ~~~~ | /\\ / /| |
+ || \\ _/ / | |
+ ~ ~ ~~~ _|| (_/ (___)_| |Nov291999
+ (__) (____)
+</pre>
+<p>Here are some Perl Poems I wrote. They don't do anything useful when you run them, but they don't produce a compiler error either. They only exist for fun and demonstrate what you can do with Perl syntax.</p>
+<p>Wikipedia: "Perl poetry is the practice of writing poems that can be compiled as legal Perl code, for example the piece known as Black Perl. Perl poetry is made possible by the large number of English words that are used in the Perl language. New poems are regularly submitted to the community at PerlMonks."</p>
+<a class="textlink" href="https://en.wikipedia.org/wiki/Perl">https://en.wikipedia.org/wiki/Perl</a><br />
+<h2>math.pl</h2>
+<pre>
+#!/usr/bin/perl
+
+# (C) 2006 by Paul C. Buetow (http://paul.buetow.org)
+
+goto library for study $math;
+BEGIN { s/earching/ books/
+and read $them, $at, $the } library:
+
+our $topics, cos and tan,
+require strict; import { of, tied $patience };
+
+do { int'egrate'; sub trade; };
+do { exp'onentize' and abs'olutize' };
+study and study and study and study;
+
+foreach $topic ({of, math}) {
+you, m/ay /go, to, limits }
+
+do { not qw/erk / unless $success
+and m/ove /o;$n and study };
+
+do { int'egrate'; sub trade; };
+do { exp'onentize' and abs'olutize' };
+study and study and study and study;
+
+grep /all/, exp'onents' and cos'inuses';
+/seek results/ for @all, log'4rithms';
+
+'you' =~ m/ay /go, not home
+unless each %book ne#ars
+$completion;
+
+do { int'egrate'; sub trade; };
+do { exp'onentize' and abs'olutize' };
+
+#at
+home: //ig,'nore', time and sleep $very =~ s/tr/on/g;
+__END__
+
+</pre>
+<h2>christmas.pl</h2>
+<pre>
+#!/usr/bin/perl
+
+# (C) 2006 by Paul C. Buetow (http://paul.buetow.org)
+
+Christmas:{time;#!!!
+
+Children: do tell $wishes;
+
+Santa: for $each (@children) {
+BEGIN { read $each, $their, wishes and study them; use Memoize#ing
+
+} use constant gift, 'wrapping';
+package Gifts; pack $each, gift and bless $each and goto deliver
+or do import if not local $available,!!! HO, HO, HO;
+
+redo Santa, pipe $gifts, to_childs;
+redo Santa and do return if last one, is, delivered;
+
+deliver: gift and require diagnostics if our $gifts ,not break;
+do{ use NEXT; time; tied $gifts} if broken and dump the, broken, ones;
+The_children: sleep and wait for (each %gift) and try { to =&gt; untie $gifts };
+
+redo Santa, pipe $gifts, to_childs;
+redo Santa and do return if last one, is, delivered;
+
+The_christmas_tree: formline s/ /childrens/, $gifts;
+alarm and warn if not exists $Christmas{ tree}, @t, $ENV{HOME};
+write &lt;&lt;EMail
+ to the parents to buy a new christmas tree!!!!111
+ and send the
+EMail
+;wait and redo deliver until defined local $tree;
+
+redo Santa, pipe $gifts, to_childs;
+redo Santa and do return if last one, is, delivered ;}
+
+END {} our $mission and do sleep until next Christmas ;}
+
+__END__
+
+This is perl, v5.8.8 built for i386-freebsd-64int
+</pre>
+<h2>shopping.pl</h2>
+<pre>
+#!/usr/bin/perl
+
+# (C) 2007 by Paul C. Buetow (http://paul.buetow.org)
+
+BEGIN{} goto mall for $shopping;
+
+m/y/; mall: seek$s, cool products(), { to =&gt; $sell };
+for $their (@business) { to:; earn:; a:; lot:; of:; money: }
+
+do not goto home and exit mall if exists $new{product};
+foreach $of (q(uality rich products)){} package products;
+
+our $news; do tell cool products() and do{ sub#tract
+cool{ $products and shift @the, @bad, @ones;
+
+do bless [q(uality)], $products
+and return not undef $stuff if not (local $available) }};
+
+do { study and study and study for cool products() }
+and do { seek $all, cool products(), { to =&gt; $buy } };
+
+do { write $them, $down } and do { order: foreach (@case) { package s } };
+goto home if not exists $more{money} or die q(uerying) ;for( @money){};
+
+at:;home: do { END{} and:; rest:; a:; bit: exit $shopping }
+and sleep until unpack$ing, cool products();
+
+__END__
+This is perl, v5.8.8 built for i386-freebsd-64int
+</pre>
+<h2>More...</h2>
+<p>Did you like what you saw? Have a look at Github to see my other poems too:</p>
+<a class="textlink" href="https://github.com/snonux/perl-poetry">https://github.com/snonux/perl-poetry</a><br />
+<p>E-Mail me your thoughts at comments@mx.buetow.org!</p>
+ </content>
</entry>
</feed>
diff --git a/content/html/gemfeed/atom.xml b/content/html/gemfeed/atom.xml
index 905ebb2e..23974c1b 100644
--- a/content/html/gemfeed/atom.xml
+++ b/content/html/gemfeed/atom.xml
@@ -1,6 +1,6 @@
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
- <updated>2021-05-05T13:00:59+01:00</updated>
+ <updated>2021-05-06T09:41:34+01:00</updated>
<title>buetow.org feed</title>
<subtitle>Having fun with computers!</subtitle>
<link href="https://buetow.org/gemfeed/atom.xml" rel="self" />
@@ -11,87 +11,1076 @@
<link href="https://buetow.org/gemfeed/2021-04-24-welcome-to-the-geminispace.html" />
<id>https://buetow.org/gemfeed/2021-04-24-welcome-to-the-geminispace.html</id>
<updated>2021-04-24T19:28:41+01:00</updated>
- <summary>Have you reached this article already via Gemini? You need a special client for that, web browsers such as Firefox, Chrome, Safari etc. don't support the Gemini protocol. The Gemini address of this site (or the address of this capsule as people say in Geminispace) is: ... to read on visit my site.</summary>
<author>
<name>Paul Buetow</name>
<email>comments@mx.buetow.org</email>
</author>
+ <summary>Have you reached this article already via Gemini? You need a special client for that, web browsers such as Firefox, Chrome, Safari etc. don't support the Gemini protocol. The Gemini address of this site (or the address of this capsule as people say in Geminispace) is: ... to read on visit my site.</summary>
+ <content type="text/html">
+ <h1>Welcome to the Geminispace</h1>
+<p>Have you reached this article already via Gemini? You need a special client for that, web browsers such as Firefox, Chrome, Safari etc. don't support the Gemini protocol. The Gemini address of this site (or the address of this capsule as people say in Geminispace) is:</p>
+<a class="textlink" href="https://buetow.org">https://buetow.org</a><br />
+<p>If, however, you still use HTTP, then you are just surfing the fallback HTML version of this capsule. In that case I suggest reading on to learn what this is all about :-).</p>
+<pre>
+
+ /\
+ / \
+ | |
+ |NASA|
+ | |
+ | |
+ | |
+ ' `
+ |Gemini|
+ | |
+ |______|
+ '-`'-` .
+ / . \'\ . .'
+ ''( .'\.' ' .;'
+'.;.;' ;'.;' ..;;' AsH
+
+</pre>
+<h2>Motivation</h2>
+<h3>My urge to revamp my personal website</h3>
+<p>For some time I had the urge to revamp my personal website. Not to update its technology and design, but to update all the content (and keep it current) and also to start a small tech blog again. So, almost unconsciously, I started to search for a good platform and/or software to do all of that in a KISS (keep it simple, stupid) way.</p>
+<h3>My still great Laptop running hot</h3>
+<p>Earlier this year (2021) I noticed that my six-year-old but still great laptop started to run hot and slow down while surfing the web. Also, the laptop's fan became quite noisy. This is all due to the additional bloat on many websites, such as JavaScript, excessive use of CSS, tracking cookies and pixels, ads and so on.</p>
+<p>All I wanted was to read an interesting article, but after a big advertising pop-up banner appeared and made everything worse, I gave up and closed the browser tab.</p>
+<h2>Discovering the Gemini internet protocol</h2>
+<p>Around the same time I discovered a relatively new, more lightweight protocol named Gemini, which does not support CPU-intensive features such as HTML, JavaScript and CSS. Tracking and ads are not supported by the Gemini protocol either.</p>
+<p>The "downside" is that due to the limited capabilities of the Gemini protocol all sites look very old and spartan. But that is not really a downside, that is in fact a design choice people made. It is up to the client software how your capsule looks. For example, you could use a graphical client with nice font renderings and colors to improve the appearance. Or you could just use a very minimalistic command line black-and-white Gemini client. It's your (the user's) choice.</p>
+<i>Screenshot Amfora Gemini terminal client surfing this site:</i><a href="https://buetow.org/gemfeed/2021-04-24-welcome-to-the-geminispace/amfora-screenshot.png"><img alt="Screenshot Amfora Gemini terminal client surfing this site" title="Screenshot Amfora Gemini terminal client surfing this site" src="https://buetow.org/gemfeed/2021-04-24-welcome-to-the-geminispace/amfora-screenshot.png" /></a><br />
+<p>Why is there a need for a new protocol? As the modern web is a superset of Gemini, can't we just use simple HTML 1.0? That's a good and valid question. It is not a technical problem but a human problem. We tend to abuse the features once they are available. You can be sure that things stay simple and efficient as long as you are using the Gemini protocol. On the other hand you can't force every website in the modern web to only create plain and simple looking HTML pages.</p>
+<h2>My own Gemini capsule</h2>
+<p>As it is very easy to set up and maintain your own Gemini capsule (a Gemini server + content composed in the Gemtext markup language), I decided to create my own. What I really like about Gemini is that I can just use my favorite text editor and get typing. I don't need to worry about the style and design of the site, and I also don't have to test anything in ten different web browsers. I can focus only on the content! As a matter of fact, I am using the Vim editor + its spellchecker + auto word completion functionality to write this.</p>
+<h2>Advantages summarised</h2>
+<ul>
+<li>Supports an alternative to the modern bloated web</li>
+<li>Easy to operate and easy to write content</li>
+<li>No need to worry about various web browser compatibilities</li>
+<li>It's the client's responsibility how the content is designed+presented</li>
+<li>Lightweight (although not as lightweight as the Gopher protocol)</li>
+<li>Supports privacy (no cookies, no request header fingerprinting, TLS encryption)</li>
+<li>Fun to play with (it's a bit geeky yes, but a lot of fun!)</li>
+</ul>
+<h2>Dive into deep Gemini space</h2>
+<p>Check out one of the following links for more information about Gemini. For example, you will find a FAQ which explains why the protocol is named "Gemini". Many Gemini capsules are dual-hosted via Gemini and HTTP(S), so that people new to Gemini can sneak a peek at the content with a normal web browser. As a matter of fact, some people go as far as tri-hosting all their content via HTTP(S), Gemini and Gopher.</p>
+<a class="textlink" href="https://gemini.circumlunar.space">https://gemini.circumlunar.space</a><br />
+<p>E-Mail me your thoughts at comments@mx.buetow.org!</p>
+ </content>
</entry>
<entry>
<title>DTail - The distributed log tail program</title>
<link href="https://buetow.org/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program.html" />
<id>https://buetow.org/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program.html</id>
<updated>2021-04-22T19:28:41+01:00</updated>
- <summary>This article first appeared at the Mimecast Engineering Blog but I made it available here in my personal Gemini capsule too. ...to read on visit my site.</summary>
<author>
<name>Paul Buetow</name>
<email>comments@mx.buetow.org</email>
</author>
+ <summary>This article first appeared at the Mimecast Engineering Blog but I made it available here in my personal Gemini capsule too. ...to read on visit my site.</summary>
+ <content type="text/html">
+ <h1>DTail - The distributed log tail program</h1>
+<i>DTail logo image:</i><a href="https://buetow.org/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/dtail-logo.png"><img alt="DTail logo image" title="DTail logo image" src="https://buetow.org/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/dtail-logo.png" /></a><br />
+<p>This article first appeared at the Mimecast Engineering Blog but I made it available here in my personal Gemini capsule too.</p>
+<a class="textlink" href="https://medium.com/mimecast-engineering/dtail-the-distributed-log-tail-program-79b8087904bb">Original Mimecast Engineering Blog post at Medium</a><br />
+<p>Running a large cloud-based service requires monitoring the state of huge numbers of machines, a task for which many standard UNIX tools were not really designed. In this post, I will describe a simple program, DTail, that Mimecast has built and released as Open-Source, which enables us to monitor log files of many servers at once without the costly overhead of a full-blown log management system.</p>
+<p>At Mimecast, we run over 10 thousand server boxes. Most of them host multiple microservices and each of them produces log files. Even with the use of time series databases and monitoring systems, raw application logs are still an important source of information when it comes to analysing, debugging, and troubleshooting services.</p>
+<p>Every engineer familiar with UNIX or a UNIX-like platform (e.g., Linux) is well aware of tail, a command-line program for displaying a text file content on the terminal which is also especially useful for following application or system log files with tail -f logfile.</p>
+<p>Think of DTail as a distributed version of the tail program, which is very useful when you have a distributed application running on many servers. DTail is an Open-Source, cross-platform log file analysis & statistics gathering tool designed for Engineers and Systems Administrators, and it is fairly easy to use, support and maintain. It is programmed in Google Go.</p>
+<h2>A Mimecast Pet Project</h2>
+<p>DTail got its inspiration from public domain tools already available in this area, but it is a blue-sky, from-scratch development which was first presented at Mimecast’s annual internal Pet Project competition (awarded a Bronze prize). It has gained popularity since and is one of the most widely deployed DevOps tools at Mimecast (reaching nearly 10k server installations); many engineers use it on a regular basis. The Open-Source version of DTail is available at:</p>
+<a class="textlink" href="https://dtail.dev">https://dtail.dev</a><br />
+<p>Try it out — We would love any feedback. But first, read on…</p>
+<h2>Differentiating from log management systems</h2>
+<p>Why not just use a full-blown log management system? There are various Open-Source and commercial log management solutions available on the market you could choose from (e.g. the ELK stack). Most of them store the logs in a centralized location and are fairly complex to set up and operate. Possibly they are also pretty expensive to operate if you have to buy dedicated hardware (or pay fees to your cloud provider) and have to hire support staff for it.</p>
+<p>DTail does not aim to replace any of the log management tools already available but is rather an additional tool crafted especially for ad-hoc debugging and troubleshooting purposes. DTail is cheap to operate as it does not require any dedicated hardware for log storage, since it operates directly on the source of the logs. This means that a DTail server is installed on all server boxes producing logs. This decentralized approach comes with the direct advantage that there is no added delay, because the logs are not shipped to a central log storage device. The reduced complexity also makes it more robust against outages. You won’t be able to troubleshoot your distributed application very well if the log management infrastructure isn’t working either.</p>
+<i>DTail sample session animated gif:</i><a href="https://buetow.org/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/dtail.gif"><img alt="DTail sample session animated gif" title="DTail sample session animated gif" src="https://buetow.org/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/dtail.gif" /></a><br />
+<p>As a downside, you won’t be able to access any logs with DTail when the server is down. Furthermore, a server can store logs only up to a certain capacity as disks will fill up. For the purpose of ad-hoc debugging, these are not typically issues. Usually, it’s the application you want to debug and not the server. And disk space is rarely an issue for bare metal and VM-based systems these days, with sufficient space for several weeks’ worth of log storage being available. DTail also supports reading compressed logs. The currently supported compression algorithms are gzip and zstd.</p>
+<h2>Combining simplicity, security and efficiency</h2>
+<p>DTail also has a client component that connects to multiple servers concurrently to follow log files (or any other text files).</p>
+<p>The DTail client interacts with a DTail server on port TCP/2222 via the SSH protocol and does not interact in any way with the system’s SSH server (e.g., OpenSSH Server), which might already be running on port TCP/22. As a matter of fact, you don’t need a regular SSH server running for DTail at all. There is no support for interactive login shells at TCP/2222 either, as by design that port can only be used for text data streaming. The SSH protocol is used for the public/private key infrastructure and transport encryption only; DTail implements its own protocol on top of SSH for the features provided. There is no need to set up or buy any additional TLS certificates. Port 2222 can easily be reconfigured if you prefer to use a different one.</p>
+<p>The DTail server, which is a single static binary, will not fork an external process. This means that all features are implemented in native Go code (exception: Linux ACL support is implemented in C, but it must be enabled explicitly at compile time), which helps to make it robust, secure, efficient, and easy to deploy. A single client, running on a standard laptop, can connect to thousands of servers concurrently while still maintaining a small resource footprint.</p>
+<p>Recent log files are very likely still in the file system caches on the servers. Therefore, there tends to be a minimal I/O overhead involved.</p>
+<h2>The DTail family of commands</h2>
+<p>Following the UNIX philosophy, DTail includes multiple command-line commands each of them for a different purpose:</p>
+<ul>
+<li>dserver: The DTail server, the only binary required to be installed on the servers involved.</li>
+<li>dtail: The distributed log tail client for following log files.</li>
+<li>dcat: The distributed cat client for concatenating and displaying text files.</li>
+<li>dgrep: The distributed grep client for searching text files for a regular expression pattern.</li>
+<li>dmap: The distributed map-reduce client for aggregating stats from log files.</li>
+</ul>
+<i>DGrep sample session animated gif:</i><a href="https://buetow.org/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/dgrep.gif"><img alt="DGrep sample session animated gif" title="DGrep sample session animated gif" src="https://buetow.org/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/dgrep.gif" /></a><br />
+<h2>Usage example</h2>
+<p>The use of these commands is almost self-explanatory for a person already used to the standard command line in Unix systems. One of the main goals is to make DTail easy to use. A tool that is too complicated to use under high-pressure scenarios (e.g., during an incident) can be quite detrimental.</p>
+<p>The basic idea is to start one of the clients from the command line and provide a list of servers to connect to with --servers. You must also provide a path of remote (log) files via --files. If you want to process multiple files per server, you could either provide a comma-separated list of file paths or make use of file system globbing (or a combination of both).</p>
+<p>The following example would connect to all DTail servers listed in the serverlist.txt, follow all files with the ending .log and filter for lines containing the string error. You can specify any Go compatible regular expression. In this example we add the case-insensitive flag to the regex:</p>
+<pre>
+dtail --servers serverlist.txt --files '/var/log/*.log' --regex '(?i:error)'
+</pre>
+<p>You usually want to specify a regular expression as a client argument. This means that responses are pre-filtered for all matching lines on the server-side, so that only the relevant lines are sent back to the client. If your logs are growing very rapidly and the regex is not specific enough, there is a chance that your client is not fast enough to keep up with processing all of the responses. This could be due to a network bottleneck or simply a slow terminal emulator displaying the log lines on the client-side.</p>
+<p>A green 100 in the client output before each log line received from the server always indicates that there were no such problems and 100% of all log lines could be displayed on your terminal (have a look at the animated Gifs in this post). If the percentage falls below 100 it means that some of the channels used by the servers to send data to the client are congested and lines were dropped. In this case, the color will change from green to red. The user then could decide to run the same query but with a more specific regex.</p>
+<p>You could also provide a comma-separated list of servers as opposed to a text file. There are many more options you could use. The ones listed here are just the very basic ones. There are more instructions and usage examples on the GitHub page. Also, you can study even more of the available options via the --help switch (some real treasures might be hidden there).</p>
+<h2>Fitting it in</h2>
+<p>DTail integrates nicely into the user management of existing infrastructure. It follows normal system permissions and does not open new “holes” on the server, which helps to keep security departments happy. The user would not have more or fewer file read permissions than they would have via a regular SSH login shell. There is full support for SSH keys, traditional UNIX permissions, and Linux ACLs. There is also a very low resource footprint involved. On average, tailing and searching log files requires less than 100MB RAM and less than a quarter of a CPU core per participating server. Complex map-reduce queries on big data sets will require more resources accordingly.</p>
+<h2>Advanced features</h2>
+<p>The features listed here are out of the scope of this blog post but are worthwhile to mention:</p>
+<ul>
+<li>Distributed map-reduce queries on stats provided in log files with dmap. dmap comes with its own SQL-like aggregation query language.</li>
+<li>Stats streaming with continuous map-reduce queries. The difference from normal queries is that the stats are aggregated over a specified interval, only on the newly written log lines. This gives a de facto live stats view for each interval.</li>
+<li>Server-side scheduled queries on log files. The queries are configured in the DTail server configuration file and scheduled at certain time intervals. Results are written to CSV files. This is useful for generating daily stats from the log files without the need for an interactive client.</li>
+<li>Server-side stats streaming with continuous map-reduce queries. This for example can be used to periodically generate stats from the logs at a configured interval, e.g., log error counts by the minute. These then can be sent to a time-series database (e.g., Graphite) and then plotted in a Grafana dashboard.</li>
+<li>Support for custom extensions. E.g., for different server discovery methods (so you don’t have to rely on plain server lists) and log file formats (so that map-reduce queries can parse more stats from the logs).</li>
+</ul>
+<h2>For the future</h2>
+<p>There are various features we want to see in the future.</p>
+<ul>
+<li>A spartan mode, not printing out any extra information but the raw remote log files would be a nice feature to have. This will make it easier to post-process the data produced by the DTail client with common UNIX tools. (To some degree this is possible already, just disable the ANSI terminal color output of the client with -noColors and pipe the output to another program).</li>
+<li>It would be tempting to implement a dgoawk command, a distributed version of the AWK programming language implemented purely in Go, for advanced text data stream processing capabilities. There are 3rd party libraries available implementing AWK in pure Go which could be used.</li>
+<li>A more complex change would be the support of federated queries. You can connect to thousands of servers from a single client running on a laptop, but does it scale to 100k servers? Some of the servers could be used as middleware for connecting to even more servers.</li>
+<li>Another aspect is to extend the documentation. Especially the advanced features such as map-reduce query language and how to configure the server-side queries currently do require more documentation. For now, you can read the code, sample config files or just ask the author for that! But this will be certainly addressed in the future.</li>
+</ul>
+<h2>Open Source</h2>
+<p>Mimecast highly encourages you to have a look at DTail and submit an issue for any features you would like to see. Have you found a bug? Maybe you just have a question or comment? If you want to go a step further: We would also love to see pull requests for any features or improvements. Either way, if in doubt just contact us via the DTail GitHub page.</p>
+<a class="textlink" href="https://dtail.dev">https://dtail.dev</a><br />
+<p>E-Mail me your thoughts at comments@mx.buetow.org!</p>
+ </content>
</entry>
<entry>
<title>Methods in C</title>
<link href="https://buetow.org/gemfeed/2016-11-20-methods-in-c.html" />
<id>https://buetow.org/gemfeed/2016-11-20-methods-in-c.html</id>
<updated>2016-11-20T18:36:51+01:00</updated>
- <summary>You can do some sort of object oriented programming in the C Programming Language. However, that is very limited. But also very easy and straight forward to use.. .....to read on please visit my site.</summary>
<author>
<name>Paul Buetow</name>
<email>comments@mx.buetow.org</email>
</author>
+ <summary>You can do some sort of object oriented programming in the C Programming Language. However, that is very limited. But also very easy and straight forward to use.. .....to read on please visit my site.</summary>
+ <content type="text/html">
+ <h1>Methods in C</h1>
+<p>You can do some sort of object oriented programming in the C Programming Language. However, it is very limited, but also very easy and straightforward to use.</p>
+<h2>Example</h2>
+<p>Let's have a look at the following sample program. Basically, all you have to do is add a function pointer such as "calculate" to the definition of the struct "something_s". Later, during struct initialization, assign a function address to that function pointer:</p>
+<pre>
+#include &lt;stdio.h&gt;
+
+typedef struct {
+ double (*calculate)(const double, const double);
+ char *name;
+} something_s;
+
+double multiplication(const double a, const double b) {
+ return a * b;
+}
+
+double division(const double a, const double b) {
+ return a / b;
+}
+
+int main(void) {
+ something_s mult = (something_s) {
+ .calculate = multiplication,
+ .name = "Multiplication"
+ };
+
+ something_s div = (something_s) {
+ .calculate = division,
+ .name = "Division"
+ };
+
+ const double a = 3, b = 2;
+
+ printf("%s(%f, %f) =&gt; %f\n", mult.name, a, b, mult.calculate(a,b));
+ printf("%s(%f, %f) =&gt; %f\n", div.name, a, b, div.calculate(a,b));
+}
+</pre>
+<p>As you can see you can call the function (pointed by the function pointer) the same way as in C++ or Java via:</p>
+<pre>
+printf("%s(%f, %f) =&gt; %f\n", mult.name, a, b, mult.calculate(a,b));
+printf("%s(%f, %f) =&gt; %f\n", div.name, a, b, div.calculate(a,b));
+</pre>
+<p>However, that's just syntactic sugar for:</p>
+<pre>
+printf("%s(%f, %f) =&gt; %f\n", mult.name, a, b, (*mult.calculate)(a,b));
+printf("%s(%f, %f) =&gt; %f\n", div.name, a, b, (*div.calculate)(a,b));
+</pre>
+<p>Output:</p>
+<pre>
+pbuetow ~/git/blog/source [38268]% gcc methods-in-c.c -o methods-in-c
+pbuetow ~/git/blog/source [38269]% ./methods-in-c
+Multiplication(3.000000, 2.000000) =&gt; 6.000000
+Division(3.000000, 2.000000) =&gt; 1.500000
+</pre>
+<p>Not complicated at all, but nice to know and helps to make the code easier to read!</p>
+<h2>The flaw</h2>
+<p>That's actually not really how it works in object oriented languages such as Java and C++. The method call in this example is not a real method call, as "mult" and "div" are not "message receivers". What I mean by that is that the functions cannot access the state of the "mult" and "div" struct objects. In C, if you wanted to access the state of "mult" from within the calculate function, you would have to pass it as an argument:</p>
+<pre>
+mult.calculate(mult, a, b);
+</pre>
+<p>How to overcome this? You need to take it further...</p>
+<h2>Taking it further</h2>
+<p>If you want to take it further, type "Object-Oriented Programming with ANSI-C" into your favorite internet search engine and you will find some crazy stuff. Some go as far as writing a C preprocessor in AWK, which takes some object oriented pseudo-C and transforms it into plain C so that the C compiler can compile it to machine code. This is actually similar to how the C++ language had its origins.</p>
+<p>E-Mail me your thoughts at comments@mx.buetow.org!</p>
+ </content>
</entry>
<entry>
<title>Spinning up my own authoritative DNS servers</title>
<link href="https://buetow.org/gemfeed/2016-05-22-spinning-up-my-own-authoritative-dns-servers.html" />
<id>https://buetow.org/gemfeed/2016-05-22-spinning-up-my-own-authoritative-dns-servers.html</id>
<updated>2016-05-22T18:59:01+01:00</updated>
- <summary>Finally, I had time to deploy my own authoritative DNS servers (master and slave) for my domains 'buetow.org' and 'buetow.zone'. My domain name provider is Schlund Technologies. They allow their customers to manually edit the DNS records (BIND files). And they also give you the opportunity to set your own authoritative DNS servers for your domains. From now I am making use of that option.. .....to read on please visit my site.</summary>
<author>
<name>Paul Buetow</name>
<email>comments@mx.buetow.org</email>
</author>
+ <summary>Finally, I had time to deploy my own authoritative DNS servers (master and slave) for my domains 'buetow.org' and 'buetow.zone'. My domain name provider is Schlund Technologies. They allow their customers to manually edit the DNS records (BIND files). And they also give you the opportunity to set your own authoritative DNS servers for your domains. From now I am making use of that option.. .....to read on please visit my site.</summary>
+ <content type="text/html">
+ <h1>Spinning up my own authoritative DNS servers</h1>
+<h2>Background</h2>
+<p>Finally, I had time to deploy my own authoritative DNS servers (master and slave) for my domains "buetow.org" and "buetow.zone". My domain name provider is Schlund Technologies. They allow their customers to manually edit the DNS records (BIND files). And they also give you the opportunity to set your own authoritative DNS servers for your domains. From now on, I am making use of that option.</p>
+<a class="textlink" href="http://www.schlundtech.de">Schlund Technologies</a><br />
+<h2>All FreeBSD Jails</h2>
+<p>In order to set up my authoritative DNS servers, I installed a FreeBSD Jail dedicated to DNS with Puppet on my root machine as follows:</p>
+<pre>
+include freebsd
+
+freebsd::ipalias { '2a01:4f8:120:30e8::14':
+ ensure =&gt; up,
+ proto =&gt; 'inet6',
+ preflen =&gt; '64',
+ interface =&gt; 're0',
+ aliasnum =&gt; '5',
+}
+
+include jail::freebsd
+
+class { 'jail':
+ ensure =&gt; present,
+ jails_config =&gt; {
+ dns =&gt; {
+ '_ensure' =&gt; present,
+ '_type' =&gt; 'freebsd',
+ '_mirror' =&gt; 'ftp://ftp.de.freebsd.org',
+ '_remote_path' =&gt; 'FreeBSD/releases/amd64/10.1-RELEASE',
+ '_dists' =&gt; [ 'base.txz', 'doc.txz', ],
+ '_ensure_directories' =&gt; [ '/opt', '/opt/enc' ],
+ 'host.hostname' =&gt; "'dns.ian.buetow.org'",
+ 'ip4.addr' =&gt; '192.168.0.15',
+ 'ip6.addr' =&gt; '2a01:4f8:120:30e8::15',
+ },
+ .
+ .
+ }
+}
+</pre>
+<h2>PF firewall</h2>
+<p>Please note that "dns.ian.buetow.org" is just the Jail name of the master DNS server (and "caprica.ian.buetow.org" the name of the Jail for the slave DNS server) and that I am using the DNS names "dns1.buetow.org" (master) and "dns2.buetow.org" (slave) for the actual service names (these are the DNS servers visible to the public). Please also note that the IPv4 address is an internal one. I have a PF firewall configured to use NAT and PAT. The DNS ports (TCP and UDP) are being forwarded to that Jail. By default, all ports are blocked, so I am adding an exception rule for the IPv6 address as well. These are the PF rules in use:</p>
+<pre>
+% cat /etc/pf.conf
+.
+.
+# dns.ian.buetow.org
+rdr pass on re0 proto tcp from any to $pub_ip port {53} -&gt; 192.168.0.15
+rdr pass on re0 proto udp from any to $pub_ip port {53} -&gt; 192.168.0.15
+pass in on re0 inet6 proto tcp from any to 2a01:4f8:120:30e8::15 port {53} flags S/SA keep state
+pass in on re0 inet6 proto udp from any to 2a01:4f8:120:30e8::15 port {53} flags S/SA keep state
+.
+.
+</pre>
+<h2>Puppet managed BIND zone files</h2>
+<p>In "manifests/dns.pp" (the Puppet manifest for the Master DNS Jail itself) I configured the BIND DNS server this way:</p>
+<pre>
+class { 'bind_freebsd':
+ config =&gt; "puppet:///files/bind/named.${::hostname}.conf",
+ dynamic_config =&gt; "puppet:///files/bind/dynamic.${::hostname}",
+}
+</pre>
+<p>The Puppet module is actually a pretty simple one. It installs the file "/usr/local/etc/named/named.conf" and it populates the "/usr/local/etc/named/dynamicdb" directory with all my zone files.</p>
+<p>Once (Puppet-) applied inside of the Jail I get this:</p>
+<pre>
+paul uranus:~/git/blog/source [4268]% ssh admin@dns1.buetow.org pgrep -lf named
+60748 /usr/local/sbin/named -u bind -c /usr/local/etc/namedb/named.conf
+paul uranus:~/git/blog/source [4269]% ssh admin@dns1.buetow.org tail -n 13 /usr/local/etc/namedb/named.conf
+zone "buetow.org" {
+ type master;
+ notify yes;
+ allow-update { key "buetoworgkey"; };
+ file "/usr/local/etc/namedb/dynamic/buetow.org";
+};
+
+zone "buetow.zone" {
+ type master;
+ notify yes;
+ allow-update { key "buetoworgkey"; };
+ file "/usr/local/etc/namedb/dynamic/buetow.zone";
+};
+paul uranus:~/git/blog/source [4277]% ssh admin@dns1.buetow.org cat /usr/local/etc/namedb/dynamic/buetow.org
+$TTL 3600
+@ IN SOA dns1.buetow.org. domains.buetow.org. (
+ 25 ; Serial
+ 604800 ; Refresh
+ 86400 ; Retry
+ 2419200 ; Expire
+ 604800 ) ; Negative Cache TTL
+; Infrastructure domains
+@ IN NS dns1
+@ IN NS dns2
+* 300 IN CNAME web.ian
+buetow.org. 86400 IN A 78.46.80.70
+buetow.org. 86400 IN AAAA 2a01:4f8:120:30e8:0:0:0:11
+buetow.org. 86400 IN MX 10 mail.ian
+dns1 86400 IN A 78.46.80.70
+dns1 86400 IN AAAA 2a01:4f8:120:30e8:0:0:0:15
+dns2 86400 IN A 164.177.171.32
+dns2 86400 IN AAAA 2a03:2500:1:6:20::
+.
+.
+.
+.
+</pre>
+<p>That is my master DNS server. My slave DNS server runs in another Jail on another bare metal machine. Everything is set up similarly to the master DNS server. However, that server is located in a different DC and in different IP subnets. The only difference is the "named.conf": it is configured to be a slave, which means that the "dynamicdb" gets populated by BIND itself while doing zone transfers from the master.</p>
+<pre>
+paul uranus:~/git/blog/source [4279]% ssh admin@dns2.buetow.org tail -n 11 /usr/local/etc/namedb/named.conf
+zone "buetow.org" {
+ type slave;
+ masters { 78.46.80.70; };
+ file "/usr/local/etc/namedb/dynamic/buetow.org";
+};
+
+zone "buetow.zone" {
+ type slave;
+ masters { 78.46.80.70; };
+ file "/usr/local/etc/namedb/dynamic/buetow.zone";
+};
+</pre>
+<h2>The end result</h2>
+<p>The end result looks like this now:</p>
+<pre>
+% dig -t ns buetow.org
+; &lt;&lt;&gt;&gt; DiG 9.10.3-P4-RedHat-9.10.3-12.P4.fc23 &lt;&lt;&gt;&gt; -t ns buetow.org
+;; global options: +cmd
+;; Got answer:
+;; -&gt;&gt;HEADER&lt;&lt;- opcode: QUERY, status: NOERROR, id: 37883
+;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
+
+;; OPT PSEUDOSECTION:
+; EDNS: version: 0, flags:; udp: 512
+;; QUESTION SECTION:
+;buetow.org. IN NS
+
+;; ANSWER SECTION:
+buetow.org. 600 IN NS dns2.buetow.org.
+buetow.org. 600 IN NS dns1.buetow.org.
+
+;; Query time: 41 msec
+;; SERVER: 192.168.1.254#53(192.168.1.254)
+;; WHEN: Sun May 22 11:34:11 BST 2016
+;; MSG SIZE rcvd: 77
+
+% dig -t any buetow.org @dns1.buetow.org
+; &lt;&lt;&gt;&gt; DiG 9.10.3-P4-RedHat-9.10.3-12.P4.fc23 &lt;&lt;&gt;&gt; -t any buetow.org @dns1.buetow.org
+;; global options: +cmd
+;; Got answer:
+;; -&gt;&gt;HEADER&lt;&lt;- opcode: QUERY, status: NOERROR, id: 49876
+;; flags: qr aa rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 7
+
+;; OPT PSEUDOSECTION:
+; EDNS: version: 0, flags:; udp: 4096
+;; QUESTION SECTION:
+;buetow.org. IN ANY
+
+;; ANSWER SECTION:
+buetow.org. 86400 IN A 78.46.80.70
+buetow.org. 86400 IN AAAA 2a01:4f8:120:30e8::11
+buetow.org. 86400 IN MX 10 mail.ian.buetow.org.
+buetow.org. 3600 IN SOA dns1.buetow.org. domains.buetow.org. 25 604800 86400 2419200 604800
+buetow.org. 3600 IN NS dns2.buetow.org.
+buetow.org. 3600 IN NS dns1.buetow.org.
+
+;; ADDITIONAL SECTION:
+mail.ian.buetow.org. 86400 IN A 78.46.80.70
+dns1.buetow.org. 86400 IN A 78.46.80.70
+dns2.buetow.org. 86400 IN A 164.177.171.32
+mail.ian.buetow.org. 86400 IN AAAA 2a01:4f8:120:30e8::12
+dns1.buetow.org. 86400 IN AAAA 2a01:4f8:120:30e8::15
+dns2.buetow.org. 86400 IN AAAA 2a03:2500:1:6:20::
+
+;; Query time: 42 msec
+;; SERVER: 78.46.80.70#53(78.46.80.70)
+;; WHEN: Sun May 22 11:34:41 BST 2016
+;; MSG SIZE rcvd: 322
+</pre>
+<h2>Monitoring</h2>
+<p>For monitoring I am using Icinga2 (I am operating two Icinga2 instances in two different DCs). I may write another blog article about Icinga2, but to give you the idea, these are the snippets added to my Icinga2 configuration:</p>
+<pre>
+apply Service "dig" {
+ import "generic-service"
+
+ check_command = "dig"
+ vars.dig_lookup = "buetow.org"
+ vars.timeout = 30
+
+ assign where host.name == "dns.ian.buetow.org" || host.name == "caprica.ian.buetow.org"
+}
+
+apply Service "dig6" {
+ import "generic-service"
+
+ check_command = "dig"
+ vars.dig_lookup = "buetow.org"
+ vars.timeout = 30
+ vars.check_ipv6 = true
+
+ assign where host.name == "dns.ian.buetow.org" || host.name == "caprica.ian.buetow.org"
+}
+</pre>
+<h2>DNS update workflow</h2>
+<p>Whenever I have to change a DNS entry, all I have to do is:</p>
+<ul>
+<li>Git clone or update the Puppet repository</li>
+<li>Update/commit and push the zone file (e.g. "buetow.org")</li>
+<li>Wait for Puppet. Puppet will deploy the updated zone file and reload the BIND server.</li>
+<li>The BIND server will notify all slave DNS servers (at the moment only one), which then transfer the new version of the zone.</li>
+</ul>
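One detail the workflow above glosses over: BIND slaves only transfer a zone when its SOA serial increases, so every zone file edit committed to the Puppet repository has to bump the serial. A minimal Python sketch of such a bump (hypothetical; the actual tooling used here is not shown):

```python
import re

def bump_serial(zone_text):
    # Increment the first "<number> ; Serial" occurrence in a zone file,
    # matching the comment style used in the zone listing above.
    def repl(m):
        return "%d%s" % (int(m.group(1)) + 1, m.group(2))
    return re.sub(r"(\d+)(\s*;\s*Serial)", repl, zone_text, count=1)

zone = """$TTL 3600
@ IN SOA dns1.buetow.org. domains.buetow.org. (
        25 ; Serial
        604800 ; Refresh
        86400 ; Retry
        2419200 ; Expire
        604800 ) ; Negative Cache TTL
"""
print("26 ; Serial" in bump_serial(zone))  # True
```

Forgetting this step is a classic reason why slaves silently keep serving the old zone.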
+<p>That's much more comfortable than manually clicking through some web UIs at Schlund Technologies.</p>
+<p>E-Mail me your thoughts at comments@mx.buetow.org!</p>
+ </content>
</entry>
<entry>
<title>Offsite backup with ZFS (Part 2)</title>
<link href="https://buetow.org/gemfeed/2016-04-16-offsite-backup-with-zfs-part2.html" />
<id>https://buetow.org/gemfeed/2016-04-16-offsite-backup-with-zfs-part2.html</id>
<updated>2016-04-16T22:43:42+01:00</updated>
- <summary>I enhanced the procedure a bit. From now on I am having two external 2TB USB hard drives. Both are setup exactly the same way. To decrease the probability that they will not fail at about the same time both drives are of different brands. One drive is kept at the secret location. The other one is kept at home right next to my HP MicroServer. ...to read on visit my site.</summary>
<author>
<name>Paul Buetow</name>
<email>comments@mx.buetow.org</email>
</author>
+ <summary>I enhanced the procedure a bit. From now on I am having two external 2TB USB hard drives. Both are set up exactly the same way. To decrease the probability that they will fail at about the same time, both drives are of different brands. One drive is kept at the secret location. The other one is kept at home right next to my HP MicroServer. ...to read on visit my site.</summary>
+ <content type="text/html">
+ <h1>Offsite backup with ZFS (Part 2)</h1>
+<pre>
+ ________________
+|# : : #|
+| : ZFS/GELI : |________________
+| : Offsite : |# : : #|
+| : Backup 1 : | : ZFS/GELI : |
+| :___________: | : Offsite : |
+| _________ | : Backup 2 : |
+| | __ | | :___________: |
+| || | | | _________ |
+\____||__|_____|_| | __ | |
+ | || | | |
+ \____||__|_____|__|
+</pre>
+<a class="textlink" href="https://buetow.org/gemfeed/2016-04-03-offsite-backup-with-zfs.html">Read the first part before reading any further here...</a><br />
+<p>I enhanced the procedure a bit. From now on I am using two external 2TB USB hard drives. Both are set up exactly the same way. To decrease the probability that they will fail at about the same time, the drives are of different brands. One drive is kept at the secret location. The other one is kept at home right next to my HP MicroServer.</p>
+<p>Whenever I am updating the offsite backup, I do it on the drive that is kept locally. Afterwards I bring it to the secret location, swap the drives and bring the other one back home. This ensures that I always have an offsite backup available at a location other than my home - even while updating one copy of it.</p>
+<p>Furthermore, I added scrubbing (*zpool scrub ...*) to the script. It ensures that the file system is consistent and that there are no bad blocks on the disk. To increase reliability I also ran a *zfs set copies=2 zroot*. That setting is also synchronized to the offsite ZFS pool. ZFS now stores every data block on disk twice. Yes, it consumes twice as much disk space, but it makes the pool more fault tolerant against hardware errors (e.g. only individual disk sectors going bad).</p>
+<p>E-Mail me your thoughts at comments@mx.buetow.org!</p>
+ </content>
</entry>
<entry>
<title>Offsite backup with ZFS</title>
<link href="https://buetow.org/gemfeed/2016-04-03-offsite-backup-with-zfs.html" />
<id>https://buetow.org/gemfeed/2016-04-03-offsite-backup-with-zfs.html</id>
<updated>2016-04-03T22:43:42+01:00</updated>
- <summary>When it comes to data storage and potential data loss I am a paranoid person. It is not just due to my job but also due to a personal experience I encountered over 10 years ago: A single drive failure and loss of all my data (pictures, music, ....). ...to read on visit my site.</summary>
<author>
<name>Paul Buetow</name>
<email>comments@mx.buetow.org</email>
</author>
+ <summary>When it comes to data storage and potential data loss I am a paranoid person. It is not just due to my job but also due to a personal experience I encountered over 10 years ago: A single drive failure and loss of all my data (pictures, music, ....). ...to read on visit my site.</summary>
+ <content type="text/html">
+ <h1>Offsite backup with ZFS</h1>
+<pre>
+ ________________
+|# : : #|
+| : ZFS/GELI : |
+| : Offsite : |
+| : Backup : |
+| :___________: |
+| _________ |
+| | __ | |
+| || | | |
+\____||__|_____|__|
+</pre>
+<h2>Please don't lose all my pictures again!</h2>
+<p>When it comes to data storage and potential data loss I am a paranoid person. It is not just due to my job but also due to a personal experience I encountered over 10 years ago: A single drive failure and loss of all my data (pictures, music, ....).</p>
+<p>A little about my personal infrastructure: I am running my own (mostly FreeBSD based) root servers across several countries (two in Germany, one in Canada, one in Bulgaria) which store all my online data (E-Mail and my Git repositories). I am syncing incremental (and encrypted) ZFS snapshots back and forth between these servers, so the data of either server can be recovered from the other.</p>
+<h2>Local storage box for offline data</h2>
+<p>Also, I am operating a local server (an HP MicroServer) at home in my apartment. Full snapshots of all ZFS volumes are pulled from the "online" servers to the local server every other week, and the incremental ZFS snapshots every day. That local server has a ZFS mirror with 3 disks configured (local triple redundancy). I keep up to half a year's worth of ZFS snapshots of all volumes. That local server also contains all my offline data such as pictures, private documents, videos, books, various other backups, etc.</p>
+<p>Once weekly, all the data of that local server is copied to two external USB drives as a backup (without the historic snapshots). For simplicity these USB drives are not formatted with ZFS but with good old UFS. This gives me a chance to recover from a (potential) ZFS disaster. ZFS is a complex thing. Sometimes it is good not to trust complex things!</p>
+<h2>Storing it at my apartment is not enough</h2>
+<p>Now I am thinking about an offsite backup of all this local data. The problem is that all the data remains in a single physical location: my local MicroServer. What happens when the house burns down or someone steals my server including the internal disks and the attached USB drives? My first thought was to back up everything to the "cloud". The major issue there, however, is the limited upload bandwidth available (only 1MBit/s).</p>
+<p>The solution is adding another USB drive (2TB) with an encryption container (GELI) and a ZFS pool on it. The GELI encryption requires a secret key and a secret passphrase. I update the data on that drive once every 3 months (my calendar reminds me about it) and afterwards keep the drive at a secret location outside of my apartment. All the information needed to decrypt it (mounting the GELI container) is stored at another (secure) place. Key and passphrase are kept at different places though. Even if someone knew of it, they would not be able to decrypt it, as some additional insider knowledge would be required as well.</p>
+<h2>Walking one round less</h2>
+<p>I am thinking of buying a second 2TB USB drive and setting it up the same way as the first one, so I could alternate the backups. One drive would be at the secret location and the other would be at home, and the drives would swap locations after each cycle. This would give some protection against the failure of a drive, and I would have to go to the secret location only once (swapping the drives) instead of twice (picking the drive up in order to update the data + bringing it back to the secret location).</p>
+<p>E-Mail me your thoughts at comments@mx.buetow.org!</p>
+ </content>
</entry>
<entry>
<title>The Fype Programming Language</title>
<link href="https://buetow.org/gemfeed/2010-05-09-the-fype-programming-language.html" />
<id>https://buetow.org/gemfeed/2010-05-09-the-fype-programming-language.html</id>
<updated>2010-05-09T12:48:29+01:00</updated>
- <summary>Fype is an interpreted programming language created by me for learning and fun. The interpreter is written in C. It has been tested on FreeBSD and NetBSD and may also work on other Unix like operating systems such as Linux based ones. To be honest, besides learning and fun there is really no other use case of why Fype actually exists as many other programming languages are much faster and more powerful.. .....to read on please visit my site.</summary>
<author>
<name>Paul Buetow</name>
<email>comments@mx.buetow.org</email>
</author>
+ <summary>Fype is an interpreted programming language created by me for learning and fun. The interpreter is written in C. It has been tested on FreeBSD and NetBSD and may also work on other Unix like operating systems such as Linux based ones. To be honest, besides learning and fun there is really no other use case of why Fype actually exists as many other programming languages are much faster and more powerful. ...to read on please visit my site.</summary>
+ <content type="text/html">
+ <h1>The Fype Programming Language</h1>
+<p>Fype is an interpreted programming language created by me for learning and fun. The interpreter is written in C. It has been tested on FreeBSD and NetBSD and may also work on other Unix-like operating systems such as Linux based ones. To be honest, besides learning and fun there is really no use case for Fype, as many other programming languages are much faster and more powerful.</p>
+<p>The Fype syntax is very simple, using a maximum look-ahead of 1 and a very easy top-down parsing mechanism. Fype parses and interprets its code simultaneously. This means that syntax errors are only detected at program runtime.</p>
+<p>Fype is a recursive acronym and means "Fype is For Your Program Execution" or "Fype is Free Yak Programmed for ELF". You could also say "It's not a hype - it's Fype!".</p>
+<h2>Object oriented C style</h2>
+<p>The Fype interpreter is written in an object-oriented style of C. Each "main component" has its own .h and .c file. There is a struct type for each component (most components, at least) which can be initialized using a "COMPONENT_new" function and destroyed using a "COMPONENT_delete" function. Method calls follow the same schema, e.g. "COMPONENT_METHODNAME". There is no such thing as class inheritance or polymorphism involved.</p>
+<p>To give you an idea how it works here as an example is a snippet from the main Fype "class header":</p>
+<pre>
+typedef struct {
+ Tupel *p_tupel_argv; // Contains command line options
+ List *p_list_token; // Initial list of token
+ Hash *p_hash_syms; // Symbol table
+ char *c_basename;
+} Fype;
+</pre>
+<p>And here is a snippet from the main Fype "class implementation":</p>
+<pre>
+Fype*
+fype_new() {
+ Fype *p_fype = malloc(sizeof(Fype));
+
+ p_fype-&gt;p_hash_syms = hash_new(512);
+ p_fype-&gt;p_list_token = list_new();
+ p_fype-&gt;p_tupel_argv = tupel_new();
+ p_fype-&gt;c_basename = NULL;
+
+ garbage_init();
+
+ return (p_fype);
+}
+
+void
+fype_delete(Fype *p_fype) {
+ argv_tupel_delete(p_fype-&gt;p_tupel_argv);
+
+ hash_iterate(p_fype-&gt;p_hash_syms, symbol_cleanup_hash_syms_cb);
+ hash_delete(p_fype-&gt;p_hash_syms);
+
+ list_iterate(p_fype-&gt;p_list_token, token_ref_down_cb);
+ list_delete(p_fype-&gt;p_list_token);
+
+ if (p_fype-&gt;c_basename)
+ free(p_fype-&gt;c_basename);
+
+ garbage_destroy();
+}
+
+int
+fype_run(int i_argc, char **pc_argv) {
+ Fype *p_fype = fype_new();
+
+ // argv: Maintains command line options
+ argv_run(p_fype, i_argc, pc_argv);
+
+ // scanner: Creates a list of token
+ scanner_run(p_fype);
+
+ // interpret: Interpret the list of token
+ interpret_run(p_fype);
+
+ fype_delete(p_fype);
+
+ return (0);
+}
+</pre>
+<h2>Data types</h2>
+<p>Fype uses auto type conversion. However, if you want to know what's going on you may take a look at the following basic data types:</p>
+<ul>
+<li>integer - Specifies a number</li>
+<li>double - Specifies a double precision number</li>
+<li>string - Specifies a string</li>
+<li>number - May be an integer or a double number</li>
+<li>any - May be any type above</li>
+<li>void - No type</li>
+<li>identifier - A variable name, a procedure name or a function name</li>
+</ul>
+<p>There is no boolean type, but we can use the integer values 0 for false and 1 for true. There is support for explicit type casting too.</p>
+<h2>Syntax</h2>
+<h3>Comments</h3>
+<p>Text from a # character until the end of the current line is considered a comment. Multi-line comments may start with a #* and end with a *# anywhere. The exception is when those signs appear inside of strings.</p>
+<h3>Variables</h3>
+<p>Variables can be defined with the "my" keyword (inspired by Perl :-). If you don't assign a value during declaration, the variable gets the default integer value 0. Variables may be changed during program runtime. Variables may be deleted using the "undef" keyword! Example:</p>
+<pre>
+my foo = 1 + 2;
+say foo;
+
+my bar = 12, baz = foo;
+say 1 + bar;
+say bar;
+
+my baz;
+say baz; # Will print out 0
+</pre>
+<p>You may use the "defined" keyword to check if an identifier has been defined or not:</p>
+<pre>
+ifnot defined foo {
+ say "No foo yet defined";
+}
+
+my foo = 1;
+
+if defined foo {
+ put "foo is defined and has the value ";
+ say foo;
+}
+</pre>
+<h3>Synonyms</h3>
+<p>Each variable can have as many synonyms as wished. A synonym is another name to access the content of a specific variable. Here is an example of how to use it:</p>
+<pre>
+my foo = "foo";
+my bar = \foo;
+foo = "bar";
+
+# The synonym variable should now also set to "bar"
+assert "bar" == bar;
+</pre>
+<p>Synonyms can be used for all kinds of identifiers. They are not limited to normal variables but can also be used for function and procedure names etc. (more about functions and procedures later).</p>
+<pre>
+# Create a new procedure baz
+proc baz { say "I am baz"; }
+
+# Make a synonym bay of baz, then undefine baz
+my bay = \baz;
+
+undef baz;
+
+# bay still has a reference of the original procedure baz
+bay; # this prints out "I am baz"
+</pre>
+<p>The "syms" keyword gives you the total number of synonyms pointing to a specific value:</p>
+<pre>
+my foo = 1;
+say syms foo; # Prints 1
+
+my baz = \foo;
+say syms foo; # Prints 2
+say syms baz; # Prints 2
+
+undef baz;
+say syms foo; # Prints 1
+</pre>
+<h2>Statements and expressions</h2>
+<p>A Fype program is a list of statements. Each keyword, expression or function call is part of a statement. Each statement ends with a semicolon. Example:</p>
+<pre>
+my bar = 3, foo = 1 + 2;
+say foo;
+exit foo - bar;
+</pre>
+<h3>Parentheses</h3>
+<p>All parentheses for function arguments are optional. They help to make the code more readable. They also help to force precedence of expressions.</p>
+<h3>Basic expressions</h3>
+<p>Any "any" value holding a string will be automatically converted to an integer value.</p>
+<pre>
+(any) &lt;any&gt; + &lt;any&gt;
+(any) &lt;any&gt; - &lt;any&gt;
+(any) &lt;any&gt; * &lt;any&gt;
+(any) &lt;any&gt; / &lt;any&gt;
+(integer) &lt;any&gt; == &lt;any&gt;
+(integer) &lt;any&gt; != &lt;any&gt;
+(integer) &lt;any&gt; &lt;= &lt;any&gt;
+(integer) &lt;any&gt; gt &lt;any&gt;
+(integer) &lt;any&gt; &lt;&gt; &lt;any&gt;
+(integer) &lt;any&gt; lt &lt;any&gt;
+(integer) not &lt;any&gt;
+</pre>
+<h3>Bitwise expressions</h3>
+<pre>
+(integer) &lt;any&gt; :&lt; &lt;any&gt;
+(integer) &lt;any&gt; :&gt; &lt;any&gt;
+(integer) &lt;any&gt; and &lt;any&gt;
+(integer) &lt;any&gt; or &lt;any&gt;
+(integer) &lt;any&gt; xor &lt;any&gt;
+</pre>
+<h3>Numeric expressions</h3>
+<pre>
+(number) neg &lt;number&gt;
+</pre>
+<p>... returns the negative value of "number":</p>
+<pre>
+(integer) no &lt;integer&gt;
+</pre>
+<p>... returns 1 if the argument is 0, otherwise it will return 0! If no argument is given, then 0 is returned!</p>
+<pre>
+(integer) yes &lt;integer&gt;
+</pre>
+<p>... always returns 1. The parameter is optional. Example:</p>
+<pre>
+# Prints out 1, because foo is not defined
+if yes { say no defined foo; }
+</pre>
+<h2>Control statements</h2>
+<p>Control statements available in Fype:</p>
+<pre>
+if &lt;expression&gt; { &lt;statements&gt; }
+</pre>
+<p>... runs the statements if the expression evaluates to a true value.</p>
+<pre>
+ifnot &lt;expression&gt; { &lt;statements&gt; }
+</pre>
+<p>... runs the statements if the expression evaluates to a false value.</p>
+<pre>
+while &lt;expression&gt; { &lt;statements&gt; }
+</pre>
+<p>... runs the statements as long as the expression evaluates to a true value.</p>
+<pre>
+until &lt;expression&gt; { &lt;statements&gt; }
+</pre>
+<p>... runs the statements as long as the expression evaluates to a false value.</p>
+<h2>Scopes</h2>
+<p>A new scope starts with a { and ends with a }. An exception is a procedure, which does not use its own scope (see later in this manual). Control statements and functions support scopes. The "scope" function prints out all available symbols in the current scope. Here is a small example:</p>
+<pre>
+my foo = 1;
+
+{
+ # Prints out 1
+ put defined foo;
+ {
+ my bar = 2;
+
+ # Prints out 1
+ put defined bar;
+
+ # Prints out all available symbols at this
+ # point to stdout. Those are: bar and foo
+ scope;
+ }
+
+ # Prints out 0
+ put defined bar;
+
+ my baz = 3;
+}
+
+# Prints out 0
+say defined bar;
+</pre>
+<p>Another example including an actual output:</p>
+<pre>
+./fype -e 'my global; func foo { my var4; func bar { my var2, var3; func baz { my var1; scope; } baz; } bar; } foo;'
+Scopes:
+Scope stack size: 3
+Global symbols:
+SYM_VARIABLE: global (id=00034, line=-0001, pos=-001, type=TT_INTEGER, dval=0.000000, refs=-1)
+SYM_FUNCTION: foo
+Local symbols:
+SYM_VARIABLE: var1 (id=00038, line=-0001, pos=-001, type=TT_INTEGER, dval=0.000000, refs=-1)
+1 level(s) up:
+SYM_VARIABLE: var2 (id=00036, line=-0001, pos=-001, type=TT_INTEGER, dval=0.000000, refs=-1)
+SYM_VARIABLE: var3 (id=00037, line=-0001, pos=-001, type=TT_INTEGER, dval=0.000000, refs=-1)
+SYM_FUNCTION: baz
+2 level(s) up:
+SYM_VARIABLE: var4 (id=00035, line=-0001, pos=-001, type=TT_INTEGER, dval=0.000000, refs=-1)
+SYM_FUNCTION: bar
+</pre>
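The dump above reflects a stack of symbol tables: lookups search the innermost scope first and then walk outward level by level, while definitions always land in the innermost scope. As a purely illustrative model of those semantics (Python, not Fype's actual C implementation):

```python
# Illustrative scope-stack model of the lookup behavior described above.
# This is a sketch, not Fype's real hash-based symbol table.

class ScopeStack:
    def __init__(self):
        self.scopes = [{}]  # global scope at index 0

    def enter(self):
        self.scopes.append({})

    def leave(self):
        self.scopes.pop()

    def define(self, name, value=0):  # Fype defaults variables to integer 0
        self.scopes[-1][name] = value

    def lookup(self, name):
        # Innermost scope first, then walk outward ("1 level(s) up", ...).
        for scope in reversed(self.scopes):
            if name in scope:
                return scope[name]
        raise NameError(name)

stack = ScopeStack()
stack.define("foo", 1)
stack.enter()
stack.define("bar", 2)
print(stack.lookup("foo"))  # 1: found one level up
stack.leave()
# "bar" is no longer reachable once its scope has been left
```

This also mirrors why "defined bar" prints 0 after the inner block in the example above: its scope dictionary has been popped off the stack.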
+<h2>Definedness </h2>
+<pre>
+(integer) defined &lt;identifier&gt;
+</pre>
+<p>... returns 1 if "identifier" has been defined. Returns 0 otherwise.</p>
+<pre>
+(integer) undef &lt;identifier&gt;
+</pre>
+<p>... tries to undefine/delete the "identifier". Returns 1 if it succeeded, otherwise 0 is returned.</p>
+<h2>System </h2>
+<p>These are some system and interpreter specific built-in functions supported:</p>
+<pre>
+(void) end
+</pre>
+<p>... exits the program with the exit status of 0.</p>
+<pre>
+(void) exit &lt;integer&gt;
+</pre>
+<p>... exits the program with the specified exit status.</p>
+<pre>
+(integer) fork
+</pre>
+<p>... forks a subprocess. It returns 0 for the child process and the pid of the child process otherwise! Example:</p>
+<pre>
+my pid = fork;
+
+if pid {
+ put "I am the parent process; child has the pid ";
+ say pid;
+
+} ifnot pid {
+ say "I am the child process";
+}
+</pre>
+<p>To execute the garbage collector do:</p>
+<pre>
+(integer) gc
+</pre>
+<p>It returns the number of items freed! You may wonder why most of the time it will return a value of 0: Fype tries to free memory that is no longer needed ASAP. This may change in future versions in order to gain faster execution speed!</p>
+<h3>I/O </h3>
+<pre>
+(any) put &lt;any&gt;
+</pre>
+<p>... prints out the argument</p>
+<pre>
+(any) say &lt;any&gt;
+</pre>
+<p>... is the same as put, but also appends a trailing newline.</p>
+<pre>
+(void) ln
+</pre>
+<p>... just prints a newline.</p>
+<h2>Procedures and functions</h2>
+<h3>Procedures</h3>
+<p>A procedure can be defined with the "proc" keyword and deleted with the "undef" keyword. A procedure does not return any value and does not support parameter passing. It uses already defined variables (e.g. global variables). A procedure does not have its own namespace; it uses the calling namespace. It is possible to define new variables inside of a procedure in the current namespace.</p>
+<pre>
+proc foo {
+ say 1 + a * 3 + b;
+ my c = 6;
+}
+
+my a = 2, b = 4;
+
+foo; # Run the procedure. Print out "11\n"
+say c; # Print out "6\n";
+</pre>
+<h3>Nested procedures</h3>
+<p>It's possible to define procedures inside of procedures. Since procedures don't have their own scope, nested procedures become available in the current scope as soon as the outer procedure has run for the first time. You may use the "defined" keyword to check whether a procedure has been defined or not.</p>
+<pre>
+proc foo {
+ say "I am foo";
+
+ undef bar;
+ proc bar {
+ say "I am bar";
+ }
+}
+
+# Here bar would produce an error because
+# the proc is not yet defined!
+# bar;
+
+foo; # Here the procedure foo will define the procedure bar!
+bar; # Now the procedure bar is defined!
+foo; # Here the procedure foo will redefine bar again!
+</pre>
+<h3>Functions</h3>
+<p>A function can be defined with the "func" keyword and deleted with the "undef" keyword. Functions do not yet return values and do not yet support parameter passing. They use local (lexically scoped) variables. If a certain variable does not exist locally, an already defined variable from an enclosing scope is used (e.g. one scope above).</p>
+<pre>
+func foo {
+ say 1 + a * 3 + b;
+ my c = 6;
+}
+
+my a = 2, b = 4;
+
+foo; # Run the function. Print out "11\n"
+say c; # Will produce an error, because c is out of scope!
+</pre>
+<h3>Nested functions</h3>
+<p>Nested functions work the same way the nested procedures work, with the exception that nested functions will not be available anymore after the function has been left!</p>
+<pre>
+func foo {
+ func bar {
+ say "Hello i am nested";
+ }
+
+ bar; # Calling nested
+}
+
+foo;
+bar; # Will produce an error, because bar is out of scope!
+</pre>
+<h2>Arrays</h2>
+<p>Some progress on arrays has been made too. The following example creates a multi-dimensional array "foo". Its first element is the return value of the func, which is "bar". The fourth value is a string "3" converted to a double number. The last element is an anonymous array which itself contains another anonymous array as its last element:</p>
+<pre>
+func bar { say "bar" }
+my foo = [bar, 1, 4/2, double "3", ["A", ["BA", "BB"]]];
+say foo;
+</pre>
+<p>It produces the following output:</p>
+<pre>
+% ./fype arrays.fy
+bar
+01
+2
+3.000000
+A
+BA
+BB
+</pre>
+<h2>Fancy stuff</h2>
+<p>Fancy stuff like OOP, Unicode or threading is not planned. But fancy stuff like function pointers and closures may be considered. :)</p>
+<h2>May the source be with you</h2>
+<p>You can find all of this on the GitHub page. There is also an "examples" folder containing some Fype scripts!</p>
+<a class="textlink" href="https://github.com/snonux/fype">https://github.com/snonux/fype</a><br />
+<p>E-Mail me your thoughts at comments@mx.buetow.org!</p>
+ </content>
</entry>
<entry>
<title>Perl Poetry</title>
<link href="https://buetow.org/gemfeed/2008-06-26-perl-poetry.html" />
<id>https://buetow.org/gemfeed/2008-06-26-perl-poetry.html</id>
<updated>2008-06-26T21:43:51+01:00</updated>
- <summary>Here are some Perl Poems I wrote. They don't do anything useful when you run them but they don't produce a compiler error either. They only exists for fun and demonstrate what you can do with Perl syntax.. .....to read on please visit my site.</summary>
<author>
<name>Paul Buetow</name>
<email>comments@mx.buetow.org</email>
</author>
+ <summary>Here are some Perl Poems I wrote. They don't do anything useful when you run them but they don't produce a compiler error either. They only exist for fun and demonstrate what you can do with Perl syntax. ...to read on please visit my site.</summary>
+ <content type="text/html">
+ <h1>Perl Poetry</h1>
+<pre>
+ '\|/' *
+-- * -----
+ /|\ ____
+ ' | ' {_ o^&gt; *
+ : -_ /)
+ : ( ( .-''`'.
+ . \ \ / \
+ . \ \ / \
+ \ `-' `'.
+ \ . ' / `.
+ \ ( \ ) ( .')
+ ,, t '. | / | (
+ '|``_/^\___ '| |`'-..-'| ( ()
+_~~|~/_|_|__/|~~~~~~~ | / ~~~~~ | | ~~~~~~~~
+ -_ |L[|]L|/ | |\ MJP ) )
+ ( |( / /|
+ ~~ ~ ~ ~~~~ | /\\ / /| |
+ || \\ _/ / | |
+ ~ ~ ~~~ _|| (_/ (___)_| |Nov291999
+ (__) (____)
+</pre>
+<p>Here are some Perl Poems I wrote. They don't do anything useful when you run them, but they don't produce a compiler error either. They only exist for fun and demonstrate what you can do with Perl syntax.</p>
+<p>Wikipedia: "Perl poetry is the practice of writing poems that can be compiled as legal Perl code, for example the piece known as Black Perl. Perl poetry is made possible by the large number of English words that are used in the Perl language. New poems are regularly submitted to the community at PerlMonks."</p>
+<a class="textlink" href="https://en.wikipedia.org/wiki/Perl">https://en.wikipedia.org/wiki/Perl</a><br />
+<h2>math.pl</h2>
+<pre>
+#!/usr/bin/perl
+
+# (C) 2006 by Paul C. Buetow (http://paul.buetow.org)
+
+goto library for study $math;
+BEGIN { s/earching/ books/
+and read $them, $at, $the } library:
+
+our $topics, cos and tan,
+require strict; import { of, tied $patience };
+
+do { int'egrate'; sub trade; };
+do { exp'onentize' and abs'olutize' };
+study and study and study and study;
+
+foreach $topic ({of, math}) {
+you, m/ay /go, to, limits }
+
+do { not qw/erk / unless $success
+and m/ove /o;$n and study };
+
+do { int'egrate'; sub trade; };
+do { exp'onentize' and abs'olutize' };
+study and study and study and study;
+
+grep /all/, exp'onents' and cos'inuses';
+/seek results/ for @all, log'4rithms';
+
+'you' =~ m/ay /go, not home
+unless each %book ne#ars
+$completion;
+
+do { int'egrate'; sub trade; };
+do { exp'onentize' and abs'olutize' };
+
+#at
+home: //ig,'nore', time and sleep $very =~ s/tr/on/g;
+__END__
+
+</pre>
+<h2>christmas.pl</h2>
+<pre>
+#!/usr/bin/perl
+
+# (C) 2006 by Paul C. Buetow (http://paul.buetow.org)
+
+Christmas:{time;#!!!
+
+Children: do tell $wishes;
+
+Santa: for $each (@children) {
+BEGIN { read $each, $their, wishes and study them; use Memoize#ing
+
+} use constant gift, 'wrapping';
+package Gifts; pack $each, gift and bless $each and goto deliver
+or do import if not local $available,!!! HO, HO, HO;
+
+redo Santa, pipe $gifts, to_childs;
+redo Santa and do return if last one, is, delivered;
+
+deliver: gift and require diagnostics if our $gifts ,not break;
+do{ use NEXT; time; tied $gifts} if broken and dump the, broken, ones;
+The_children: sleep and wait for (each %gift) and try { to =&gt; untie $gifts };
+
+redo Santa, pipe $gifts, to_childs;
+redo Santa and do return if last one, is, delivered;
+
+The_christmas_tree: formline s/ /childrens/, $gifts;
+alarm and warn if not exists $Christmas{ tree}, @t, $ENV{HOME};
+write &lt;&lt;EMail
+ to the parents to buy a new christmas tree!!!!111
+ and send the
+EMail
+;wait and redo deliver until defined local $tree;
+
+redo Santa, pipe $gifts, to_childs;
+redo Santa and do return if last one, is, delivered ;}
+
+END {} our $mission and do sleep until next Christmas ;}
+
+__END__
+
+This is perl, v5.8.8 built for i386-freebsd-64int
+</pre>
+<h2>shopping.pl</h2>
+<pre>
+#!/usr/bin/perl
+
+# (C) 2007 by Paul C. Buetow (http://paul.buetow.org)
+
+BEGIN{} goto mall for $shopping;
+
+m/y/; mall: seek$s, cool products(), { to =&gt; $sell };
+for $their (@business) { to:; earn:; a:; lot:; of:; money: }
+
+do not goto home and exit mall if exists $new{product};
+foreach $of (q(uality rich products)){} package products;
+
+our $news; do tell cool products() and do{ sub#tract
+cool{ $products and shift @the, @bad, @ones;
+
+do bless [q(uality)], $products
+and return not undef $stuff if not (local $available) }};
+
+do { study and study and study for cool products() }
+and do { seek $all, cool products(), { to =&gt; $buy } };
+
+do { write $them, $down } and do { order: foreach (@case) { package s } };
+goto home if not exists $more{money} or die q(uerying) ;for( @money){};
+
+at:;home: do { END{} and:; rest:; a:; bit: exit $shopping }
+and sleep until unpack$ing, cool products();
+
+__END__
+This is perl, v5.8.8 built for i386-freebsd-64int
+</pre>
+<h2>More...</h2>
+<p>Did you like what you saw? Have a look at Github to see my other poems too:</p>
+<a class="textlink" href="https://github.com/snonux/perl-poetry">https://github.com/snonux/perl-poetry</a><br />
+<p>E-Mail me your thoughts at comments@mx.buetow.org!</p>
+ </content>
</entry>
</feed>