,.......... ..........,
,..,' '.' ',..,
,' ,' : ', ',
,' ,' : ', ',
,' ,' : ', ',
,' ,'............., : ,.............', ',
,' '............ '.' ............' ',
'''''''''''''''''';''';''''''''''''''''''
'''
-=[ typewriters ]=- 1/98
.-------.
_|~~ ~~ |_ .-------.
=(_|_______|_)= _|~~ ~~ |_
|:::::::::| =(_|_______|_)
|:::::::[]| |:::::::::|
|o=======.| |:::::::[]|
jgs `"""""""""` |o=======.|
mod. by Paul Buetow `"""""""""`
# Hello world
<< echo "> This site was generated at $(date --iso-8601=seconds) by \`Gemtexter\`"
Welcome to this capsule!
<<<
for i in {1..10}; do
    echo "Multiline template line $i"
done
>>>
# Hello world
> This site was generated at 2023-03-15T19:07:59+02:00 by `Gemtexter`
Welcome to this capsule!
Multiline template line 1
Multiline template line 2
Multiline template line 3
Multiline template line 4
Multiline template line 5
Multiline template line 6
Multiline template line 7
Multiline template line 8
Multiline template line 9
Multiline template line 10
See more entries about DTail and Golang:
<< template::inline::index dtail golang
Blablabla...
See more entries about DTail and Golang:

=> ./2022-10-30-installing-dtail-on-openbsd.gmi 2022-10-30 Installing DTail on OpenBSD
=> ./2022-04-22-programming-golang.gmi 2022-04-22 The Golang Programming language
=> ./2022-03-06-the-release-of-dtail-4.0.0.gmi 2022-03-06 The release of DTail 4.0.0
=> ./2021-04-22-dtail-the-distributed-log-tail-program.gmi 2021-04-22 DTail - The distributed log tail program (You are currently reading this)

Blablabla...
declare -xr PRE_GENERATE_HOOK=./pre_generate_hook.sh
declare -xr POST_PUBLISH_HOOK=./post_publish_hook.sh
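These variables only need to point to executable scripts, so a hook can be a plain shell script. A minimal sketch of what a pre-generate hook body might look like (the body itself is an assumption for illustration, not Gemtexter's actual code):

```shell
#!/usr/bin/env bash
# pre_generate_hook.sh - hypothetical example body. Gemtexter only
# requires that PRE_GENERATE_HOOK points to an executable script.
set -euo pipefail

# E.g. record when generation started, in the same ISO format the
# site templates use:
timestamp=$(date --iso-8601=seconds)
echo "Pre-generate hook ran at $timestamp"
```

A post-publish hook would look the same; a typical use is triggering a deployment or notification step after publishing.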
% cat gemfeed/2023-02-26-title-here.gmi
# Title here
The remaining content of the Gemtext file...
% cat gemfeed/2023-02-26-title-here.gmi
# Title here
> Published at 2023-02-26T21:43:51+01:00
The remaining content of the Gemtext file...
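The transformation is easy to reproduce outside of Gemtexter. Here is a small awk sketch (not Gemtexter's actual implementation) that inserts the `> Published at ...` line right after the title heading:

```shell
#!/usr/bin/env bash
# Sketch only - not Gemtexter's actual code. Inserts a "Published at"
# line after the first line (the "# Title" heading) of a Gemtext file.
set -euo pipefail

file=$(mktemp)
printf '# Title here\nThe remaining content of the Gemtext file...\n' > "$file"

published="2023-02-26T21:43:51+01:00"
result=$(awk -v d="$published" \
    'NR == 1 { print; print "> Published at " d; next } { print }' "$file")
echo "$result"
rm -f "$file"
```

The output matches the second `cat` above: title, the injected `> Published at` line, then the unchanged remainder of the file.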
Published at 2023-02-26T23:48:01+02:00
|\ "Music should be heard not only with the ears, but also the soul."
|---|--\-----------------------|-----------------------------------------|
| | |\ | |@ |\ |
|---|---|--\-------------------|-------------/|----|------|--\----|------|
| @| | |\ |O | 3 / | |@ | | |
|---|--@|---|--\--------|------|---------/----|----|------|-------|------|
| @| @| \ |O | / | | |@ @| @|. |
|-----------|-----|-----|------|-----/---|---@|----|--------------|------|
| @| | |O | | | | @|. |
|-----------|----@|-----|------|----|---@|------------------------|------|
@| | | Larry Komro @|.
-@- [kom...@uwec.edu]
Do you need help fully disconnecting from work in the evenings or for the weekend? Shutting down from work won't just improve your work-life balance; it will also significantly improve the quality of your personal life and work. After a restful weekend, you will be much more energized and productive the next working day. So it should not just be in your own interest, but also your employer's, that you fully relax and shut down after work.
Have a routine. Try to finish work around the same time every day. Write any outstanding tasks down for the next day so you can be sure you will remember them. Writing them down works wonders: you can remove them from your mind for the rest of the day (or the upcoming weekend), knowing you will pick them up the next working day. Tidying up your workplace can also be part of your daily shutdown routine.
A commute home from the office also helps greatly, as it separates your work from your personal life. Don't work on your commute home, though! If you don't commute but work from home, it helps to take a walk around the block or through a nearby park to disconnect from work.
Unless you are self-employed, you have likely signed an N-hour-per-week contract with your employer, and your regular working times run from X o'clock in the morning to Y o'clock in the evening (with an M-minute lunch break in between). There may be some flexibility in your working times, too. But that kind of flexibility (e.g. extending the lunch break to pick up a family member from the airport) is agreed upon, and you compensate for it, for example, by starting earlier or working later the next day. Overall, your weekly working time stays at N hours.
Another exception is when you are on an on-call schedule and are expected to watch your work notifications outside office hours. But that is usually only a few days per month and, therefore, not the norm. It should also be compensated accordingly.
There might be some maintenance work you must carry out that can only be done over the weekend, but it should be explicitly agreed upon and compensated for. There might also be a scenario where a production incident comes up shortly before the end of the work day, requiring you (and your colleagues) to stay a bit longer. But this should be the exception.
Other than that, there is no reason why you should work outside office hours. I know many people who suffer from the fear of missing out, so Slack messages and e-mails are checked until late in the evening, during weekends, or on holidays. I have personally improved a lot here over the last couple of months, but I still fall into this trap occasionally.
Also, when you respond to Slack messages and e-mails after hours, your colleagues may think you have nothing better to do. They will also take it for granted and keep messaging you outside regular office hours.
Constantly checking your messages outside regular office hours makes it impossible to shut down and relax from work altogether.
Often, your mind goes back to work-related topics even after work. That's normal, as you concentrated hard on your work throughout the day. The brain unconsciously continues working and will present you with random work-related thoughts. You can counteract this by focusing on non-work activities, which may include:
Some of these can be habit-stacked: exercise could be combined with watching videos about your passion project (e.g. lectures about that new programming language you are learning for fun). Walking, for example, can be combined with listening to an audiobook or music, or with thinking about your passion project.
Even if you have children, getting a pet works wonders. My cat, for example, reminds me a few times daily to take a few minutes' break to pet her, play, or give her food. So my cat not only helps me after work but throughout the day.
My neighbour also works from home, and he has dogs, which he regularly has to take out to the park.
If you are upset about something, making it impossible to shut down from work, write everything down (e.g. with a pen in a paper journal). Writing things down helps you "get rid" of the negativity, especially after conflicts with colleagues or company decisions you don't agree with. This kind of self-therapy is excellent. Brain-dump all your emotions and (even if opinionated) opinions so you have everything on paper. Once done, you won't think about it as much anymore, as you know you can access that information if required. Stopping the rumination will be much easier now. You will likely never read those notes again, but at least writing the thoughts down saved your day.
Write down three things that went well during the day. This helps you appreciate the day.
Think about what is fun and what motivates you. Maybe the next promotion to a Principal or Manager role isn't for you. Many fall into the trap of stressing themselves out to satisfy their employer so that the next upgrade happens, and they think about it constantly, even after work. But it is more important that you enjoy your craft. Work on what you expect from yourself. Ideally, your goals are aligned with your employer's. I am not saying you should abandon everything your manager asks you to do, but it is, after all, your life, and you have to decide where and on what you want to work. Don't sell yourself short, though. Keep track of your accomplishments.
Every day you gave your best was a good day; the day's outcome doesn't matter. What matters is that you know you gave your best and are closer to your goals than the day before. This gives you a sense of progress and accomplishment.
There are days you feel drained after work and think you didn't progress towards your goals at all. It's more challenging to shut down from work after such a day. A quick hack is to work on a quick win before the end of the day, giving you a sense of accomplishment after all. Another way is to make progress on a fun passion project after work. It need not be work-related; the sense of accomplishment will still be there.
E-Mail your comments to hi@paul.cyou :-)
Back to the main site

Published at 2023-01-23T15:31:52+02:00
Art by Joan Stark
_.===========================._
.'` .- - __- - - -- --__--- -. `'.
__ / ,'` _|--|_________|--|_ `'. \
/'--| ; _.'\ | ' ' | /'._ ; |
// | |_.-' .-'.' ___ '.'-. '-._| |
(\) \"` _.-` / .-'`_ `'-. \ `-._ `"/
(\) `-' | .' .-'" "'-. '. | `-`
(\) | / .'(3)(2)(1)'. \ |
(\) | / / (4) .-. \ \ |
(\) | | |(5) ( )'==,J | |
(\) | \ \ (6) '-' (0) / / |
(\) | \ '.(7)(8)(9).' / |
(\) ___| '. '-.._..-' .' |
(\) /.--| '-._____.-' |
(\) (\) |\_ _ __ _ __ __/|
(\) (\) | |
(\)_._._.__(\) | |
(\\\\jgs\\\) '.___________________.'
'-'-'-'--'
In 2021 I wrote "On Being Pedantic about Open-Source", which had a section "What about mobile?" where I described the dilemma of having to use proprietary mobile operating systems. With GrapheneOS, I have found my perfect solution for personal mobile phone use.
On Being Pedantic about Open-Source

What is GrapheneOS?
GrapheneOS is a privacy and security-focused mobile OS with Android app compatibility developed as a non-profit open-source project. It's focused on the research and development of privacy and security technologies, including substantial improvements to sandboxing, exploits mitigations and the permission model.
GrapheneOS is an independent Android distribution based on the Android Open Source Project (AOSP) but hardened in multiple ways. Other independent Android distributions, like LineageOS, are also based on AOSP, but GrapheneOS takes hardening further, far enough that it can be the daily driver on my phone.
https://GrapheneOS.org

GrapheneOS allows configuring up to 32 user profiles (including a guest profile) on a single phone. A profile is a completely separate environment within the phone, and it is possible to switch between profiles instantly. Sessions of a profile can continue running in the background or be fully terminated. Each profile can have completely different settings and different applications installed.
I use my default profile with primarily open-source applications I trust. I use another profile for banking (PayPal, various proprietary bank apps, the Amazon store app, etc.) and another for various Google services (which I try to avoid, but have to use once in a while). Furthermore, I have configured a profile for social media use. That one isn't in my default profile, because otherwise I would be tempted to scroll social media all the time; I only want to do that intentionally, by switching to the corresponding profile.
The neat thing about the profiles is that some can run a sandboxed version of Google Play (see later in this post), while others don't. So some profiles can entirely operate without any Google Play, and only some profiles (to which I rarely switch) have Google Play enabled.
You notice how much longer (multiple days) your phone lasts on a single charge when Google Play Services isn't running in the background. This says a lot about its background activity and indicates that using Google Play shouldn't be the norm.
There's also the case where I use an app from the Google Play store (because it isn't available on F-Droid) that doesn't require Google Play Services to run in the background. This is where the Aurora store comes in. The Aurora store can be installed through F-Droid. Aurora acts as an anonymous proxy from your phone to the Google Play Store and lets you install apps from there. No Google credentials are required!
https://f-droid.org

There's a similar solution for watching YouTube videos. You can use the NewPipe app (also from F-Droid), which acts as an anonymous proxy for watching videos from YouTube. So there isn't any need to install the official YouTube app or to log in to your Google account. What's so bad about the official app? You don't know which data it sends about you to Google, so it is a privacy concern.
Before switching to GrapheneOS, I had been using LineageOS on one of my phones for a couple of years. Due to privacy concerns, I didn't install Google Play on it and only installed apps from the F-Droid store. That meant I always had to keep a secondary personal phone around with all the proprietary apps that (partially) only work with Google Play (e.g. banking, navigation, travel apps from various airlines, etc.). When travelling, I always had to carry that second phone with Google Play on it; without it, life would become inconvenient pretty quickly.
With GrapheneOS, it is different. Here, I not only have a separate user profile, "Google", for various Google apps, but Google Play itself also runs in a sandbox!
GrapheneOS has a compatibility layer providing the option to install and use the official releases of Google Play in the standard app sandbox. Google Play receives no special access or privileges on GrapheneOS, as opposed to elsewhere, where it bypasses the app sandbox and receives a massive amount of highly privileged access. Instead, the compatibility layer teaches it how to work within the full app sandbox. It also isn't used as a backend for the OS services, as it would be elsewhere, since GrapheneOS doesn't use Google Play even when it's installed.
When I need to access Google Play, I can switch to the "Google" profile. Even there, Google is sandboxed to the absolute minimum permissions required to be operational, which gives additional privacy protection.
The sad truth is that Google Maps is still the best navigation app. When driving unknown routes, I switch to my Google profile to use Google Maps. I don't need to do that on streets I know, but having Google Maps around is crucial (for me) when driving to a new destination.
Also, Google Translate and Google Lens are still the best translation apps I know. I recently relocated to another country where I am still learning the language, and Google Lens has proven very helpful on various occasions by translating text ad hoc into English or German for me.
The same applies to banking. Many banking apps require Google Play to be available (it might even be more secure to only use banking apps from the Google Play store, due to official support and security updates). I rarely need my mobile banking app, but once in a while I do. As you have guessed by now, I switch to my banking profile (with Google Play enabled), do what I need to do, terminate the session, go back to my default profile, and life goes on :-).
It is great to have the flexibility to use any proprietary Android app when needed. That only applies to around 1% of my phone usage time, but you don't always know when you will need "that one app" right now. So it's perfect that this is covered by the phone you always have with you.
I really want my phone to shoot good-looking pictures so that I can later upload them to the Irregular Ninja:
https://irregular.ninja

The stock AOSP camera app could be better. Photos usually look washed out, and the app lacks features. With GrapheneOS, there are two options:
The GrapheneOS camera app is much better than the stock AOSP camera app. I have compared the photo quality of my Pixel phone under LineageOS and GrapheneOS, and the differences are pronounced. I didn't compare the quality with the official Google camera app, but from the comparison videos I have seen, the differences don't seem groundbreaking.
For automatic backups of my photos, I rely on a self-hosted instance of Nextcloud (with a client app available via F-Droid). So there isn't any need to rely on Google apps and services (Google Photos or the Google Camera app) anymore, and that's great!
https://nextcloud.com

I also use Nextcloud to synchronize my notes (Nextcloud Notes), my RSS news feeds (Nextcloud News) and my contacts (DAVx5). All the required apps are available in the F-Droid store.
Another great thing about GrapheneOS is that, besides putting your apps into different profiles, you can also restrict network access and configure storage scopes per app individually.
For example, let's say you install that one proprietary app from the Google Play Store through the Aurora store, and you want to ensure the app doesn't phone home over the internet. Nothing is easier: just remove the network access permission from that one app.
The app may also want to store and read some data on your phone (e.g. it could be a proprietary app for enhancing photos, which therefore needs access to a photo folder). In GrapheneOS, you can configure a storage scope for that particular app, e.g. allow reading and writing in one folder while still forbidding access to all other folders on your phone.
Termux can be installed on any Android phone through F-Droid, so it doesn't need to be a GrapheneOS phone. But I have to mention Termux here as it significantly adds value to my phone experience.
Termux is an Android terminal emulator and Linux environment app that works directly with no rooting or setup required. A minimal base system is installed automatically - additional packages are available using the APT package manager.
https://termux.dev

In short, Termux is an entire Linux environment running on your Android phone. Just pair your phone with a Bluetooth keyboard, and you have the whole Linux experience. I only use terminal Linux applications with Termux, though. What makes it especially great is that I can write a new blog post (in Neovim through Termux on my phone) or do some coding while travelling (e.g. during a flight), or look up my passwords or other personal documents (through my terminal-based password manager). All changes I commit to Git can be synced to the server with a simple git push once online again (e.g. after the plane has landed).
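The offline workflow can be sketched as follows. The repository path, file name and branch are assumptions for illustration; in Termux, git and an editor come from the APT-based package manager first (`pkg install git neovim`):

```shell
#!/usr/bin/env bash
# Sketch of the offline writing workflow: commit locally, push later.
set -euo pipefail

repo=$(mktemp -d)/blog
git init -q "$repo" && cd "$repo"
git config user.email "you@example.org"
git config user.name "You"

# Write a post offline (in real life: nvim gemfeed/2022-12-24-post.gmi)
printf '# Draft written offline\n' > 2022-12-24-post.gmi
git add -A
git commit -qm "New post draft"
last_commit=$(git log --oneline -1)
echo "$last_commit"

# Once online again (e.g. after the plane has landed):
# git push origin main
```

Everything up to the push works entirely offline; the final push is the only step that needs connectivity.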
There are Pixel phones with a 6-inch screen, which is decent enough for occasional use like that, and everything (the phone, the BT keyboard, maybe an external battery pack) fits nicely into a small travel pouch.
Strictly speaking, an Android phone is a Linux phone, but one that is heavily modified and customized. For me, a "pure" Linux phone runs a more standard Linux kernel in a distribution like Ubuntu Touch or Mobian.
A pure Linux phone, e.g. Ubuntu Touch installed on a PinePhone, Fairphone, Librem 5, or Volla phone, is very appealing to me. It would also provide an even better Linux experience than Termux does. Some support running LineageOS within Anbox, enabling you to occasionally run various proprietary Android apps within Linux.
Ubuntu Touch

But here, Google Play would not be sandboxed, and you could not configure individual network permissions and storage scopes as in GrapheneOS. Pure Linux phones usually come with a poor camera, and the battery life is generally pretty bad (only a few hours). Also, no big tech company pushes the development of Linux phones. Everything relies on hobbyists, whereas multiple big tech companies put a lot of effort into Android, and a lot of that code also goes into the Android Open Source Project.
Currently, pure Linux phones are only a nice toy to tinker with and are still not ready (will they ever be?) to be a daily driver. SailfishOS may be an exception; I played around with it in the past. It is pretty usable, but it's not an option for me, as it is a partially proprietary operating system.
SailfishOS

Sometimes, switching profiles to use a different app is annoying, and you can't copy and paste through the system clipboard from one profile to another. But that's a small price I am willing to pay!
Another thing is that GrapheneOS only runs on Google Pixel phones, whereas LineageOS can be installed on a much larger variety of hardware. On the other hand, GrapheneOS works very well on Pixel phones. The GrapheneOS team can concentrate their development efforts on a smaller set of hardware, which improves the software's quality (best example: the camera app).
And, of course, GrapheneOS is an open-source project. This is a good thing; on the other hand, nobody can guarantee that the OS will not break or damage your phone. You have to trust the GrapheneOS project, and you should donate so they can keep up the great work. But I'd rather trust the GrapheneOS team than big tech.
E-Mail your comments to hi@paul.cyou :-)
Back to the main site

Published at 2022-12-24T23:18:40+02:00

As a regular participant in the annual Pet Project competition at work, I always try to find a project where I can learn something new. In this post, I would like to share my takeaways after revisiting Java. You can read about my motivations in my "Creative universe" post:
Creative universe

I programmed in Java back in the day as a university student, and I even implemented my Diploma Thesis in Java (it would require some overhaul to be fully compatible with a recent version of Java, though - it still compiles and runs, but with a lot of warnings!):
VS-Sim: Distributed systems simulator

However, after that, I became a Linux sysadmin and mainly continued programming in Perl, Puppet, bash, and a little Python. For personal use, I also programmed a bit in Haskell and C. After my sysadmin role, I moved to London and became a Site Reliability Engineer (SRE), where I mainly programmed in Ruby, bash, Puppet and Golang, plus a little C.
As an SRE, I don't write much Java at my workplace. I have been reading Java code to understand our software better so I can apply and suggest workarounds or fixes for existing issues and bugs. However, most of our stack is in Java, and our software engineers use Java as their primary programming language.
Over time, I had missed out on the many new features added to the language since Java 1.4, so I decided to implement my next Pet Project in Java, with learning the modern aspects of the language as my main goal. Of course, I still liked the idea of winning a Pet Project prize, but my main objective was to level up my Java skills.
This book, Effective Java by Joshua Bloch, was recommended by my brother and by at least one other colleague at work as one of the best, if not the best, books about Java programming. I read the whole book from beginning to end and immersed myself in it. I fully agree; this is a great book. Every Java developer or Java software engineer should read it!

I also recommend reading the 90-part Effective Java series on dev.to. It's a perfect companion to the book, as it explains all the chapters again from a slightly different perspective and helps you really understand the content.
Kyle Carter's 90-part Effective Java Series

During my lunch breaks, I usually take a walk around the block or in a nearby park. I used that time to listen to the Java Pub House podcast. I listened to *every* episode and learned tons of new stuff. I can highly recommend this podcast. GraalVM, a high-performance JDK distribution for Java and other JVM languages, especially captured my attention. GraalVM can compile Java code into native binaries, improving performance and easing the distribution of Java programs. Because of the latter, I should release a VS-Sim GraalVM edition one day as a Linux AppImage ;-).
https://www.javapubhouse.com

I also watched a course about Java concurrency on O'Reilly Safari Books Online. It was an excellent refresher on how Java thread pools work and on the concurrency primitives available in the standard library.
First, the source code is often the best documentation (if nicely written), and second, reading it helps you get the hang of the language and its standard practices. I started to read more and more Java code at work, whenever I had to understand how something in particular worked (e.g. while troubleshooting and debugging an issue).
Another great way to get the hang of Java again was to sit in on the code reviews of my software engineering colleagues. They are the experts on the matter and a great source of knowledge. It's OK to stay passive and only follow the reviews; sometimes, it's OK to step up and take ownership of a review. The developers are also always happy to answer any naive questions that come up.
Besides my Pet Project, I also took ownership of a regular roadmap Java project at work: making an internal Java service capable of running in Kubernetes. This involved many minor changes, plus adding classes and unit tests dealing with statelessness and a persistent job queue in Redis. The job also required reading and understanding a lot of existing Java code. It wasn't part of my job description, but it was fun, and I learned a lot. The service now runs smoothly in production. Of course, all of my code was reviewed by my software engineering colleagues.
Of the new language features and syntax, there are many personal takeaways. I can't possibly list them all, but here are some of my personal highlights:
There are also many ugly corners in Java. Many are doomed to stay forever due to historical decisions and the need to ensure backward compatibility with older versions of the Java language and the Java standard library.
While (re)learning Java, I felt like a student again and was quite enthusiastic about it initially. I invested around half a year immersing myself intensively in Java (again); the last time I did that was many years ago as a university student. I even won a Silver Prize at work for the project I implemented this year (2022 as of writing). I now feel confident understanding, debugging and patching Java code at work, which has boosted my debugging and troubleshooting skills.
I don't hate Java, but I don't love programming in it, either. I guess I will always see Java as a necessary evil to get stuff done (reading code to understand how a service works, adding a tiny feature to make my life easier, adding a quick bug fix to overcome an obstacle...).
Although Java has improved significantly since 1.4, its code still tends towards boilerplate. Not mainly because of the number of lines of code (Golang code tends to be quite repetitive, too, especially when no generics are used), but because of the levels of abstraction it uses. Class hierarchies can be ten classes deep or more, making it challenging to understand what the code is doing. Good test coverage and thorough documentation can partially mitigate the problem. Big enterprises use Java, and that is reflected in the language: there are too many libraries and abstractions, bundled with too many legacy abstractions and interfaces, and too many exceptions in the library APIs. There's even an external library named Lombok, which aims to reduce Java boilerplate code. Why is there a need for an external library? It should all be part of Java itself.
https://projectlombok.org/

Java needs a clean cut. That clean cut would be incompatible with previous versions of Java and would only promote modern best practices, without all the legacy burden carried around. The same can be said for other languages, e.g. Perl, but Perl already attacks the problem with `use` flags that switch the language's behaviour to more modern standards. Or do it like Python, which made a hard (incompatible) cut from version 2 to version 3. It would be painful, for sure. But that would be the only way I would enjoy using the language as one of my primary languages for regularly coding new stuff. For now, my Java use will stay limited to very few projects and the minor things already mentioned in this post.
Am I a Java expert now? No, by far not. But I am better now than before :-).
E-Mail your comments to hi@paul.cyou :-)
Back to the main site

Published at 2022-11-24T11:17:15+02:00; Updated at 2022-11-26
_/ \ _(\(o
/ \ / _ ^^^o
/ ! \/ ! '!!!v'
! ! \ _' ( \____
! . \ _!\ \===^\)
Art by \ \_! / __!
Gunnar Z. \! / \ <--- Emacs is a giant dragon
(\_ _/ _\ )
\ ^^--^^ __-^ /(__
^^----^^ "^--v'
As a long-time user of Vim (and NeoVim), I always wondered what GNU Emacs is really about, so I decided to try it. I didn't try vanilla GNU Emacs but Doom Emacs. I chose Doom Emacs as it is a neat distribution of Emacs with Evil mode enabled by default. Evil mode provides Vi(m) key bindings (so to speak, it emulates Vim within Emacs), and I am pretty sure I won't ever be ready to give up the muscle memory I have built over more than a decade.
GNU Emacs

I used Doom Emacs for around two months. Still, ultimately I decided to switch back to NeoVim as my primary editor and IDE, with Vim (usually pre-installed on Linux-based systems) and Nvi (usually pre-installed on *BSD systems) as my "always available" editors for quick edits. (It is worth mentioning that I don't have a strong opinion on whether Vim or NeoVim is the better editor; I prefer NeoVim as it comes with better defaults out of the box, but there is no real blocker to using Vim instead.)
Vim

So why did I switch back to the Vi family?
Emacs feels like a giant dragon, as it is much more than an editor or an integrated development environment. Emacs is a whole platform of its own. There's an E-Mail client, an IRC client, and even games you can run within Emacs. And you can change Emacs from within Emacs using its own Lisp dialect, Emacs Lisp (Emacs is largely programmed in Emacs Lisp). Therefore, Emacs is also its own programming language, and you can change every aspect of Emacs within Emacs itself. People jokingly state that Emacs is an operating system and that you should use it directly as the init process (if you don't know what the init process is: under UNIX and similar operating systems, it's the very first userland process launched; that's usually systemd on Linux-based systems, launchd on macOS, or whatever other init system the OS uses)!
In many aspects, using Emacs is like shooting at everything with a bazooka! However, I prefer it simple. I only wanted Emacs to be a good editor (which it is, too), but there's too much other stuff in Emacs that I don't want to care about! Vim and NeoVim do one thing excellently: being great text editors and, when loaded with plugins, decent IDEs, too.
I almost fell in love with Magit, an integrated Git client for Emacs. But I think the best way to interact with Git is to use the git command line directly. I don't mind typing out all the commands, as the most commonly used ones are in my shell history. Other useful Git programs I use frequently are bit and tig. Also, a mechanical keyboard makes hammering whole commands into the terminal even more enjoyable.
Magit

Magit is pretty neat for basic Git operations, but I found myself searching the internet for the correct sub-commands to do what I wanted in Git. In particular, the way branches are managed is confusing. Often, I fell back to the command line to clean up the mess I had produced with Magit (e.g. accidentally pushing to the wrong remote branch, so I found myself fixing things manually on the terminal with the git command and forced pushes...). Magit is hotkey-driven, and common commands are quickly explorable through built-in hotkey menus. Still, I found it challenging to reach the more advanced Git sub-commands that way; this was much easier with the git command directly.
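For comparison, that kind of cleanup is a handful of plain git commands. The sketch below replays it against a local bare repository standing in for the remote; all names (origin, main, accidental-branch) are assumptions for illustration:

```shell
#!/usr/bin/env bash
# Sketch: undoing an accidental push to the wrong remote branch with
# plain git. A local bare repo stands in for the real remote.
set -euo pipefail

tmp=$(mktemp -d); cd "$tmp"
git init -q --bare remote.git
git init -q work && cd work
git config user.email "you@example.org"
git config user.name "You"
git remote add origin ../remote.git

echo hello > file.txt
git add file.txt && git commit -qm "initial"

git push -q origin HEAD:accidental-branch   # oops, wrong branch
git push -q origin :accidental-branch       # delete it on the remote
git push -q origin HEAD:main                # push to the intended branch

remote_heads=$(git ls-remote --heads origin | awk '{ print $2 }')
echo "$remote_heads"                        # only refs/heads/main remains
```

When an actual history rewrite is involved, `git push --force-with-lease` is the safer variant of a forced push, as it refuses to overwrite commits you haven't seen.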
If there is one thing I envy about Emacs, it's that it is a graphical program, whereas the Vi family of editors is purely terminal-based. I see the benefits of being a graphical program, as this enables the use of multiple fonts simultaneously and the embedding of pictures and graphs (that would be neat as a Markdown preview, for example). There's also GVim (Vim with a GTK UI), but that's more of an afterthought.
There are now graphical front-end clients for NeoVim, but I still need to dig into them. Let me know your experience if you have any. Luckily, I don't rely on anything graphical in my text editor, but it would improve how the editor looks and feels. UTF-8 can already do a lot in the terminal, and terminal emulators also let you use TrueType fonts. Still, you will always be limited to one TTF font for the whole terminal; it isn't possible to have, for example, a different font for headings, paragraphs, etc. ... you get the idea. TTF+UTF-8 can't beat authentic graphics.
It is possible to customize every aspect of Emacs through Emacs Lisp. I have done some Elk Scheme programming in the past (a dialect of Lisp), but that was a long time ago, and I am not willing to dive in again just to customize my environment. I would rather take the pragmatic approach and script what I need in VimScript (a terrible language, but it gets the job done!). I watched Damian Conway's VimScript course on O'Reilly Safari Books Online, which I greatly recommend. Yes, VimScript feels clunky, funky and weird and is far less elegant than Lisp, but it gets its job done - in most cases! (That reminds me: the Vim team has announced a new major version of VimScript with improvements and language changes - I haven't gotten to it yet - but I assume that VimScript will always stay VimScript.)
Emacs Lisp
NeoVim is also programmable with Lua, which seems to be a step up, and Vim comes with a Perl plugin API (which was removed from NeoVim, but that is a different story - why would someone remove the most potent and mature text manipulation programming language from one of the most powerful text editors?).
NeoVim Lua API
One example is my workflow for composing my blog articles (e.g. the one you are currently reading): I write everything in NeoVim, but I also want every paragraph checked by Grammarly (as English is not my first language). So I write a whole paragraph, select it via visual selection with SHIFT+v, and press ,y to yank it to the system clipboard. Then I paste the paragraph into Grammarly's browser window with CTRL+v, let Grammarly suggest improvements, and copy the result back to the system clipboard with CTRL+c. Back in NeoVim, I type ,i to insert the result, overriding the old paragraph (which is still selected in visual mode) with the new content. That all sounds a bit complicated, but it's surprisingly natural and efficient.
To come back to the example, for the clipboard integration, I use this small VimScript snippet, and I didn't have to dig into any Lisp or Perl for this:
" Clipboard vnoremap ,y !pbcopy<CR>ugv vnoremap ,i !pbpaste<CR> nmap ,i !wpbpaste<CR>
That's only a very few lines, and it does precisely what I want. It's quick and dirty but gets the job done! If VimScript ever becomes too cumbersome, I can use Lua for NeoVim scripting.
Org-mode is an Emacs mode for keeping notes, authoring documents, computational notebooks, literate programming, maintaining to-do lists, planning projects, and more — in a fast and effective plain-text system. There's even a dedicated website for it:
https://orgmode.org/
In short, Org-mode is an "interactive markup language" that helps you organize everything mentioned above. I barely scratched the surface during my two-month experiment with Emacs, and I am impressed by it, so I see the benefits of having all that. But it's not for me.
I use "Dead Tree Mode" to organize my work and notes. Dead tree? Yeah, I use an actual pen and a real paper journal (Leuchtturm or a Moleskine and a set of coloured 0.5 Muji Pens are excellent choices). That's far more immersive and flexible than a computer program can ever be. Yes, some automation and interaction with the computer (like calendar scheduling etc.) are missing. Still, an actual paper journal forces you to stay simple and focus on the actual work rather than tinkering with your computer program. (But I could not resist, and I wrote a VimScript which parses a table of contents page in Markdown format of my scanned paper journals, and NeoVim allows me to select a topic so that the corresponding PDF scan on the right journal page gets opened in an external PDF viewer (the PDF viewer is zathura, it uses Vi-keybindings, of course) :-). (See the appendix of this blog post for that script).
Zathura
On the road, I also write some of my notes in Markdown format to NextCloud Notes, which is editable from my phone and via NeoVim on my computers. Markdown is much less powerful than Org-mode, but I prefer it the simple way. There's a neat terminal application, ranger, which I use to browse my NextCloud Notes once they are synced to a local folder on my machine. ranger is a file manager inspired by Vim; it makes use of Vim keybindings and feels just natural to me.
Ranger - A Vim inspired file manager
Did I mention that I also use my zsh (my default shell) and my tmux (terminal multiplexer) in Vi mode?
Z shell
I am not ready to dive deep into the whole world of Emacs. I prefer small and simple tools over complex ones. Emacs comes with many features out of the box, whereas in Vim/NeoVim you would need to install many plugins to replicate some of that behaviour. Yes, I need to invest time managing all the Vim/NeoVim plugins I use, but I feel more in control compared to Doom Emacs, where a framework around vanilla Emacs manages all the plugins. I could use vanilla Emacs and manage all my plugins the vanilla way, but for me, it's not worth the effort to learn and dive into that, as everything I want to do I can already do with Vim/NeoVim.
I am not saying that Vim/NeoVim are simple programs, but they are much simpler than Emacs, with much smaller footprints; furthermore, they appear more straightforward to me as I am used to them. I only need Vim/NeoVim to be an editor and, through some plugins, an IDE, and nothing more.
I understand the Emacs users now. Emacs is an incredibly powerful platform for almost everything, not just text editing. With Emacs, you can do nearly everything (writing, editing, programming, calendar scheduling and note taking, Jira integration, playing games, listening to music, reading and writing e-mails, browsing the web, using it as a calculator, generating HTML pages, configuring interactive menus, jumping between every feature and every file within one single session, chatting on IRC, surfing the Gopherspace... the options are endless). If you want one piece of software to rule it all and you are happy to invest a large part of your time in your platform: pick Emacs. Over time, it will become "your" Emacs, customized to your own needs and ways of working, which makes Emacs users stick to it even more.
Vim/NeoVim also come with a very high degree of customization options, but to a lesser extreme than Emacs (and still a much higher degree than most other editors out there). If you want the best text editor in the world, one which can also be tweaked into a decent IDE, look no further: pick Vim or NeoVim! You will also need to invest a lot of time in learning, tweaking and customizing Vim/NeoVim, but that's a little more straightforward, and the result is much more lightweight. Once you get used to the "Vi way of doing things", you never want to change back. I haven't given the vanilla Emacs keystrokes a fair try, but they seem terrible to me (that's probably one of the reasons why Doom Emacs uses Vim keybindings by default).
Update: One reader recommended having a look at NvChad. NvChad is a NeoVim config written in Lua that aims to provide a base configuration with a very beautiful UI and a blazing fast startup time (around 0.02 to 0.07 seconds). It tweaks UI plugins such as telescope, nvim-tree and bufferline to provide an aesthetic UI experience. That sounds interesting!
https://github.com/NvChad/NvChad
E-Mail your comments to hi@paul.cyou :-)
Back to the main site
This is the VimScript I mentioned earlier, which parses a table-of-contents index of my scanned paper journals and opens the corresponding PDF at the right page in zathura:
function! ReadJournalPageNumber()
let page = expand("<cword>")
if page !~# '^\d\+$'
for str in split(getline("."), "[ ,]")
if str =~# '^\d\+$'
let page = str
break
end
endfor
endif
return page
endfunction
function! ReadJournalMeta()
normal! mj
1/MetaFilePath:/
normal! 3w
let s:metaFilePath = expand("<cWORD>")
echom s:metaFilePath
1/MetaOffset:/
normal! 3w
let s:metaOffset = expand("<cword>")
echom s:metaOffset
1/MetaPageAtOffset:/
normal! 3w
let s:metaPageAtOffset = expand("<cword>")
echom s:metaPageAtOffset
1/MetaPagesPerScan:/
normal! 3w
let s:metaPagesPerScan = expand("<cword>")
echom s:metaPagesPerScan
normal! `j
endfunction
function! GetPdfPage(page)
return s:metaOffset + (a:page - s:metaPageAtOffset) / s:metaPagesPerScan
endfunction
function! OpenJournalPage()
let page = ReadJournalPageNumber()
if page !~# '^\d\+$'
echoerr "Could not identify Journal page number"
end
call ReadJournalMeta()
let pdfPage = GetPdfPage(page)
echon "Location is " . s:metaFilePath . ":" . pdfPage
call system("zathura --mode fullscreen -P " . pdfPage . " " . s:metaFilePath)
" call system("evince -p " . pdfPage . " " . s:metaFilePath)
endfunction
nmap ,j :call OpenJournalPage()<CR>
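The page arithmetic in GetPdfPage() is easy to sanity-check outside of Vim. A small shell sketch with made-up metadata values (offset 2, journal page 1 at that offset, 2 journal pages per scanned PDF page):

```shell
#!/bin/sh
# Hypothetical metadata values, mirroring GetPdfPage() above:
meta_offset=2          # PDF page where journal page meta_page_at_offset starts
meta_page_at_offset=1  # journal page number found at meta_offset
meta_pages_per_scan=2  # journal pages captured per scanned PDF page

journal_page=5
pdf_page=$((meta_offset + (journal_page - meta_page_at_offset) / meta_pages_per_scan))
echo "journal page $journal_page is on PDF page $pdf_page"   # → PDF page 4
```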
,_---~~~~~----._
_,,_,*^____ _____``*g*\"*,
/ __/ /' ^. / \ ^@q f
@f | | | | 0 _/
\`/ \~__((@/ __ \__((@/ \
| _l__l_ I <--- The Go Gopher
} [______] I
] | | | |
] ~ ~ |
| |
| |
| | A ;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~|~~~,--,-/ \---,-/|~~,~~~~~~~~~~~~~~~~~~~~~~~~~~~
_|\,'. /| /| `/|-.
\`.' /| , `;.
,'\ A A A A _ /| `.;
,/ _ A _ / _ /| ;
/\ / \ , , A / / `/|
/_| | _ \ , , ,/ \
// | |/ `.\ ,- , , ,/ ,/ \/
/ @| |@ / /' \ \ , > /| ,--.
|\_/ \_/ / | | , ,/ \ ./' __:..
| __ __ | | | .--. , > > |-' / `
,/| / ' \ | | | \ , | /
/ |<--.__,->| | | . `. > > / (
/_,' \\ ^ / \ / / `. >-- /^\ |
\\___/ \ / / \__' \ \ \/ \ |
`. |/ , , /`\ \ )
\ ' |/ , V \ / `-\
OpenBSD Puffy ---> `|/ ' V V \ \.' \_
'`-. V V \./'\
`|/-. \ / \ /,---`\ kat
/ `._____V_____V'
' '
$ doas pkg_add git go gmake
$ mkdir git
$ cd git
$ git clone https://github.com/mimecast/dtail
$ cd dtail
$ gmake
$ ./dtail --version
DTail 4.1.0 Protocol 4.1 Have a lot of fun!
$ file dtail
dtail: ELF 64-bit LSB executable, x86-64, version 1
$ doas pkg_delete git go gmake
$ for bin in dserver dcat dgrep dmap dtail dtailhealth; do
    doas cp -p $bin /usr/local/bin/$bin
    doas chown root:wheel /usr/local/bin/$bin
done
$ doas adduser -class nologin -group _dserver -batch _dserver
$ doas usermod -d /var/run/dserver/ _dserver
$ cat <<'END' | doas tee /etc/rc.d/dserver
#!/bin/ksh
daemon="/usr/local/bin/dserver"
daemon_flags="-cfg /etc/dserver/dtail.json"
daemon_user="_dserver"
. /etc/rc.d/rc.subr
rc_reload=NO
rc_pre() {
install -d -o _dserver /var/log/dserver
install -d -o _dserver /var/run/dserver/cache
}
rc_cmd $1 &
END
$ doas chmod 755 /etc/rc.d/dserver
desc 'Setup DTail';
task 'dtail', group => 'frontends',
sub {
my $restart = FALSE;
file '/etc/rc.d/dserver',
content => template('./etc/rc.d/dserver.tpl'),
owner => 'root',
group => 'wheel',
mode => '755',
on_change => sub { $restart = TRUE };
...
service 'dserver' => 'restart' if $restart;
service 'dserver', ensure => 'started';
};
$ doas mkdir /etc/dserver
$ curl https://raw.githubusercontent.com/mimecast/dtail/master/samples/dtail.json.sample |
doas tee /etc/dserver/dtail.json
"Common": {
"LogDir": "/var/log/dserver",
"Logger": "Fout",
"LogRotation": "Daily",
"CacheDir": "cache",
"SSHPort": 2222,
"LogLevel": "Info"
}
file '/etc/dserver',
ensure => 'directory';
file '/etc/dserver/dtail.json',
content => template('./etc/dserver/dtail.json.tpl'),
owner => 'root',
group => 'wheel',
mode => '755',
on_change => sub { $restart = TRUE };
$ cat <<'END' | doas tee /usr/local/bin/dserver-update-key-cache.sh
#!/bin/ksh
CACHEDIR=/var/run/dserver/cache
DSERVER_USER=_dserver
DSERVER_GROUP=_dserver
echo 'Updating SSH key cache'
ls /home/ | while read remoteuser; do
keysfile=/home/$remoteuser/.ssh/authorized_keys
if [ -f $keysfile ]; then
cachefile=$CACHEDIR/$remoteuser.authorized_keys
echo "Caching $keysfile -> $cachefile"
cp $keysfile $cachefile
chown $DSERVER_USER:$DSERVER_GROUP $cachefile
chmod 600 $cachefile
fi
done
# Cleanup obsolete public SSH keys
find $CACHEDIR -name \*.authorized_keys -type f |
while read cachefile; do
remoteuser=$(basename $cachefile | cut -d. -f1)
keysfile=/home/$remoteuser/.ssh/authorized_keys
if [ ! -f $keysfile ]; then
echo "Deleting obsolete cache file $cachefile"
rm $cachefile
fi
done
echo 'All set...'
END
$ doas chmod 500 /usr/local/bin/dserver-update-key-cache.sh
$ echo /usr/local/bin/dserver-update-key-cache.sh | doas tee -a /etc/daily.local
/usr/local/bin/dserver-update-key-cache.sh
file '/usr/local/bin/dserver-update-key-cache.sh',
content => template('./scripts/dserver-update-key-cache.sh.tpl'),
owner => 'root',
group => 'wheel',
mode => '500';
append_if_no_such_line '/etc/daily.local', '/usr/local/bin/dserver-update-key-cache.sh';
$ sudo rcctl enable dserver
$ sudo rcctl start dserver
$ tail -f /var/log/dserver/*.log
INFO|1022-090634|Starting scheduled job runner after 2s
INFO|1022-090634|Starting continuous job runner after 2s
INFO|1022-090644|24204|stats.go:53|2|11|7|||MAPREDUCE:STATS|currentConnections=0|lifetimeConnections=0
INFO|1022-090654|24204|stats.go:53|2|11|7|||MAPREDUCE:STATS|currentConnections=0|lifetimeConnections=0
INFO|1022-090719|Starting server|DTail 4.1.0 Protocol 4.1 Have a lot of fun!
INFO|1022-090719|Generating private server RSA host key
INFO|1022-090719|Starting server
INFO|1022-090719|Binding server|0.0.0.0:2222
INFO|1022-090719|Starting scheduled job runner after 2s
INFO|1022-090719|Starting continuous job runner after 2s
INFO|1022-090729|86050|stats.go:53|2|11|7|||MAPREDUCE:STATS|currentConnections=0|lifetimeConnections=0
INFO|1022-090739|86050|stats.go:53|2|11|7|||MAPREDUCE:STATS|currentConnections=0|lifetimeConnect
.
.
.
Ctrl+C
$ doas /usr/local/bin/dserver-update-key-cache.sh
Updating SSH key cache
Caching /home/_dserver/.ssh/authorized_keys -> /var/cache/dserver/_dserver.authorized_keys
Caching /home/admin/.ssh/authorized_keys -> /var/cache/dserver/admin.authorized_keys
Caching /home/failunderd/.ssh/authorized_keys -> /var/cache/dserver/failunderd.authorized_keys
Caching /home/git/.ssh/authorized_keys -> /var/cache/dserver/git.authorized_keys
Caching /home/paul/.ssh/authorized_keys -> /var/cache/dserver/paul.authorized_keys
Caching /home/rex/.ssh/authorized_keys -> /var/cache/dserver/rex.authorized_keys
All set...
❯ ./dgrep -user rex -servers blowfish.buetow.org,fishfinger.buetow.org --regex local /etc/fstab
CLIENT|earth|WARN|Encountered unknown host|{blowfish.buetow.org:2222 0xc0000a00f0 0xc0000a61e0 [blowfish.buetow.org]:2222 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9ZnF/LAk14SgqCzk38yENVTNfqibcluMTuKx1u53cKSp2xwHWzy0Ni5smFPpJDIQQljQEJl14ZdXvhhjp1kKHxJ79ubqRtIXBlC0PhlnP8Kd+mVLLHYpH9VO4rnaSfHE1kBjWkI7U6lLc6ks4flgAgGTS5Bb7pLAjwdWg794GWcnRh6kSUEQd3SftANqQLgCunDcP2Vc4KR9R78zBmEzXH/OPzl/ANgNA6wWO2OoKKy2VrjwVAab6FW15h3Lr6rYIw3KztpG+UMmEj5ReexIjXi/jUptdnUFWspvAmzIl6kwzzF8ExVyT9D75JRuHvmxXKKjyJRxqb8UnSh2JD4JN [23.88.35.144]:2222 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9ZnF/LAk14SgqCzk38yENVTNfqibcluMTuKx1u53cKSp2xwHWzy0Ni5smFPpJDIQQljQEJl14ZdXvhhjp1kKHxJ79ubqRtIXBlC0PhlnP8Kd+mVLLHYpH9VO4rnaSfHE1kBjWkI7U6lLc6ks4flgAgGTS5Bb7pLAjwdWg794GWcnRh6kSUEQd3SftANqQLgCunDcP2Vc4KR9R78zBmEzXH/OPzl/ANgNA6wWO2OoKKy2VrjwVAab6FW15h3Lr6rYIw3KztpG+UMmEj5ReexIjXi/jUptdnUFWspvAmzIl6kwzzF8ExVyT9D75JRuHvmxXKKjyJRxqb8UnSh2JD4JN 0xc0000a2180}
CLIENT|earth|WARN|Encountered unknown host|{fishfinger.buetow.org:2222 0xc0000a0150 0xc000460110 [fishfinger.buetow.org]:2222 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDNiikdL7+tWSN0rCaw1tOd9aQgeUFgb830V9ejkyJ5h93PKLCWZSMMCtiabc1aUeUZR//rZjcPHFLuLq/YC+Y3naYtGd6j8qVrcfG8jy3gCbs4tV9SZ9qd5E24mtYqYdGlee6JN6kEWhJxFkEwPfNlG+YAr3KC8lvEAE2JdWvaZavqsqMvHZtAX3b25WCBf2HGkyLZ+d9cnimRUOt+/+353BQFCEct/2mhMVlkr4I23CY6Tsufx0vtxx25nbFdZias6wmhxaE9p3LiWXygPWGU5iZ4RSQSImQz4zyOc9rnJeP1rwGk0OWDJhdKNXuf0kIPdzMfwxv2otgY32/DJj6L [46.23.94.99]:2222 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDNiikdL7+tWSN0rCaw1tOd9aQgeUFgb830V9ejkyJ5h93PKLCWZSMMCtiabc1aUeUZR//rZjcPHFLuLq/YC+Y3naYtGd6j8qVrcfG8jy3gCbs4tV9SZ9qd5E24mtYqYdGlee6JN6kEWhJxFkEwPfNlG+YAr3KC8lvEAE2JdWvaZavqsqMvHZtAX3b25WCBf2HGkyLZ+d9cnimRUOt+/+353BQFCEct/2mhMVlkr4I23CY6Tsufx0vtxx25nbFdZias6wmhxaE9p3LiWXygPWGU5iZ4RSQSImQz4zyOc9rnJeP1rwGk0OWDJhdKNXuf0kIPdzMfwxv2otgY32/DJj6L 0xc0000a2240}
Encountered 2 unknown hosts: 'blowfish.buetow.org:2222,fishfinger.buetow.org:2222'
Do you want to trust these hosts?? (y=yes,a=all,n=no,d=details): a
CLIENT|earth|INFO|STATS:STATS|cgocalls=11|cpu=8|connected=2|servers=2|connected%=100|new=2|throttle=0|goroutines=19
CLIENT|earth|INFO|Added hosts to known hosts file|/home/paul/.ssh/known_hosts
REMOTE|blowfish|100|7|fstab|31bfd9d9a6788844.h /usr/local ffs rw,wxallowed,nodev 1 2
REMOTE|fishfinger|100|7|fstab|093f510ec5c0f512.h /usr/local ffs rw,wxallowed,nodev 1 2
❯ ./dgrep -user rex -servers blowfish.buetow.org,fishfinger.buetow.org --regex local /etc/fstab
REMOTE|blowfish|100|7|fstab|31bfd9d9a6788844.h /usr/local ffs rw,wxallowed,nodev 1 2
REMOTE|fishfinger|100|7|fstab|093f510ec5c0f512.h /usr/local ffs rw,wxallowed,nodev 1 2
Published at 2022-09-30T09:53:23+03:00; Updated at 2022-10-12
z
z
Z
.--. Z Z
/ _(c\ .-. __
| / / '-; \'-'` `\______
\_\/'/ __/ ) / ) | \--,
| \`""`__-/ .'--/ /--------\ \
\\` ///-\/ / /---;-. '-'
jgs (________\ \
'-'
Everyone has it once in a while: A bad night's sleep. Here I attempt to list valuable tips on how to deal with it.
Don't take a day off after not sleeping enough the previous night; that would waste holiday allowance. I wouldn't be able to enjoy my free time anyway, so why not just work? There are still ways for an IT engineer to be productive (sometimes even more so) with half or less of the usual concentration power available!
Often, I am already awake early and unable to fall asleep again. My strategy here is to "attack" the day: start work early and finish early. The early bird also encounters fewer distractions from colleagues.
There's never a shortage of small items to tick off my list. Most of these items don't require my full concentration, and I will be happy to get them off my list so that the next day, after a good night's sleep, I can immerse myself again in focused, deep work with all concentration powers at hand.
Examples of "small work items" are:
I find it easy to enter the "flow state" after a bad night's sleep. All I need to do is to put on some ambient music (preferably instrumental chill house) and start to work on a not-too-difficult ticket.
Usually, the "flow state" is associated with deep-focused work, but deep-focused work isn't easily possible under sleep deprivation. It's still possible to be in the flow by working on more manageable tasks and leaving the difficult ones for the next day.
I find engaging in discussions and demanding meetings challenging after a lousy night's sleep. I still attend the sessions I am invited to as "only" a participant, but I prefer to reschedule all meetings I am the primary driver of.
This, unfortunately, also includes interviews. Interviews require full concentration. So for interviews, I would find a colleague to step in for me or ask to reschedule the interview altogether. Anything else wouldn't do them justice and would waste everyone's time!
The mind works differently under sleep deprivation: It's easier to invent new stuff, as it's easier to look at things from different perspectives. Until an hour ago, I didn't know what I would be blogging about this month; then I just started writing, and it took me only half an hour to produce the first draft of this blog post!
I don't eat breakfast or lunch on these days; I only have dinner. Not eating means my mind doesn't get foggy, and I keep up the work momentum. This is called intermittent fasting, which not only helps to keep the weight under control but also boosts concentration. Furthermore, intermittent fasting is healthy. You should include it in your routine, even after a good night's sleep.
I won't have enough energy for strenuous physical exercise on those days, but a 30- to 60-minute stretching session can make the day. Stretching will even hurt less under sleep deprivation! The stretching could also be substituted with a light Yoga session.
Walking is healthy, and the time can be used to listen to interesting podcasts. The available concentration might not be enough for more sophisticated audio literature. I will have enough energy for one or two daily walks (~10k steps for the day in total). Sometimes, I listen to music during walks. I also try to catch some bright sunlight.
I don't think that Red Bull is a healthy drink. But once in a while, a can in the early afternoon works wonders, and productivity skyrockets. Besides the Red Bull, drink a lot of water throughout the day. And make sure to pick the sugar-free version; otherwise, your intermittent fast will be broken.
I don't know how to "enforce" a nap, but sometimes I manage to power nap, and it works wonders. A 30-minute nap sometimes brings me back to normal. If you can't keep up the fast because you are too hungry, it helps to try to nap approximately 30 minutes after eating something.
It's much more challenging to keep the mind "under control" in this state. Every annoyance can potentially upset me, which could reflect on my work colleagues. It is wise to go into the day with a positive attitude, always smile, and be polite to the family and the colleagues at work. Don't take anything out on the people around you; they don't deserve it, as they didn't do anything wrong! Also, remember that some things can't be controlled at all. It's time to let go of the annoyances for the day.
To keep the good vibe, it helps to meditate for 10 minutes. Meditation doesn't have to be anything fancy. It can be just lying on the sofa and observing your thoughts as they come and go. Don't judge your thoughts, as that could put you in a negative mood. It's not necessary to sit in an uncomfortable Yoga pose, and it is not required to chant "Ohhmmmmm".
Sometimes a task requiring more concentration demands attention anyway. This is where it helps to write a note in a journal and return to the task another day. This isn't slacking off but managing the scarce concentration available for the day. I might repeat myself: today, sweat all the small stuff; tomorrow, do the deep-focused work on that crucial project again.
It's easier to forget things on those days, so everything should be written down to be worked off later. Things written down will not be overlooked!
I wouldn't say I like checking social media, as it can consume a lot of time and become addictive. But once in a while, I want to catch up with my "networks". After a bad night's sleep is the perfect time to check your social media. Once done, you won't have to do it again for the next couple of days!
E-Mail your comments to hi@paul.cyou :-)
Back to the main site
-=[ typewriter ]=- 1/98
.-------.
_|~~ ~~ |_
=(_|_______|_)=
|:::::::::|
|:::::::[]|
|o=======.|
jgs `"""""""""`
check_dependencies () {
# At least, Bash 5 is required
local -i required_version=5
IFS=. read -ra version <<< "$BASH_VERSION"
if [ "${version[0]}" -lt $required_version ]; then
log ERROR "ERROR, \"bash\" must be at least at major version $required_version!"
exit 2
fi
# These must be the GNU versions of the commands
for tool in $DATE $SED $GREP; do
if ! $tool --version | grep -q GNU; then
log ERROR "ERROR, \"$tool\" command is not the GNU version, please install!"
exit 2
fi
done
}
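The major-version check above relies on Bash-specific features. The same idea can be tried in isolation with a portable sketch (the function name and the test values are made up for illustration):

```shell
#!/bin/sh
# Standalone sketch of the major-version check above; the version
# string is passed as an argument so arbitrary values can be tried.
version_ok() {
    major=$(printf '%s\n' "$1" | cut -d. -f1)
    [ "$major" -ge 5 ]
}

version_ok "5.1.16(1)-release" && echo "5.1 is recent enough"
version_ok "4.4.20(1)-release" || echo "4.4 is too old"
```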
./gemtexter --generate '.*hello.*'
Published at 2022-07-30T12:14:31+01:00
/ _ \
The Hebern Machine \ ." ". /
___ / \
.."" "".. | O |
/ \ | |
/ \ | |
---------------------------------
_/ o (O) o _ |
_/ ." ". |
I/ _________________/ \ |
_/I ." | |
===== / I / / |
===== | | | \ | _________________." |
===== | | | | | / \ / _|_|__|_|_ __ |
| | | | | | | \ "._." / o o \ ." ". |
| --| --| -| / \ _/ / \ |
\____\____\__| \ ______ | / | | |
-------- --- / | | |
( ) (O) / \ / |
----------------------- ".__." |
_|__________________________________________|_
/ \
/________________________________________________\
ASCII Art by John Savard
I was amazed at how easy it is to automatically generate and update Let's Encrypt certificates with OpenBSD.
Let's Encrypt is a non-profit certificate authority run by Internet Security Research Group that provides X.509 certificates for Transport Layer Security (TLS) encryption at no charge. It is the world's largest certificate authority, used by more than 265 million websites, with the goal of all websites being secure and using HTTPS.
Source: Wikipedia
In short, it gives away TLS certificates for your website - for free! The catch is that the certificates are only valid for three months. So it is better to automate certificate generation and renewal.
acme-client is the default Automatic Certificate Management Environment (ACME) client on OpenBSD and part of the OpenBSD base system.
When invoked, the client first checks whether certificates actually need to be generated or renewed.
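Conceptually, that freshness check is the same thing one can do by hand with openssl(1). A sketch against a throwaway self-signed certificate (the 30-day threshold is my assumption, not necessarily what acme-client uses internally):

```shell
#!/bin/sh -e
# Sketch: decide by hand whether a certificate is due for renewal.
# Create a throwaway self-signed cert valid for 90 days to test against.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj /CN=example.org \
    -days 90 -keyout "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null

# "openssl x509 -checkend N" exits 0 if the cert is still valid N seconds
# from now. Renew when it would expire within the next 30 days.
if openssl x509 -checkend $((30 * 24 * 3600)) -noout -in "$tmp/cert.pem" >/dev/null; then
    echo "certificate still good for 30+ days, nothing to do"
else
    echo "certificate expires within 30 days, renew it"
fi
```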
Oversimplified, acme-client undertakes the following steps to generate a new certificate: it registers an account key with the certificate authority (if not already done), generates a private domain key, requests a certificate for the domain, answers the CA's HTTP challenge via the /.well-known/acme-challenge/ path, and finally writes the issued certificate chain to disk.
There is some (but easy) configuration required to make that all work on OpenBSD.
This is how my /etc/acme-client.conf looks (I copied a template from /etc/examples/acme-client.conf to /etc/acme-client.conf and added my domains at the bottom):
#
# $OpenBSD: acme-client.conf,v 1.4 2020/09/17 09:13:06 florian Exp $
#
authority letsencrypt {
api url "https://acme-v02.api.letsencrypt.org/directory"
account key "/etc/acme/letsencrypt-privkey.pem"
}
authority letsencrypt-staging {
api url "https://acme-staging-v02.api.letsencrypt.org/directory"
account key "/etc/acme/letsencrypt-staging-privkey.pem"
}
authority buypass {
api url "https://api.buypass.com/acme/directory"
account key "/etc/acme/buypass-privkey.pem"
contact "mailto:me@example.com"
}
authority buypass-test {
api url "https://api.test4.buypass.no/acme/directory"
account key "/etc/acme/buypass-test-privkey.pem"
contact "mailto:me@example.com"
}
domain buetow.org {
alternative names { www.buetow.org paul.buetow.org }
domain key "/etc/ssl/private/buetow.org.key"
domain full chain certificate "/etc/ssl/buetow.org.fullchain.pem"
sign with letsencrypt
}
domain dtail.dev {
alternative names { www.dtail.dev }
domain key "/etc/ssl/private/dtail.dev.key"
domain full chain certificate "/etc/ssl/dtail.dev.fullchain.pem"
sign with letsencrypt
}
domain foo.zone {
alternative names { www.foo.zone }
domain key "/etc/ssl/private/foo.zone.key"
domain full chain certificate "/etc/ssl/foo.zone.fullchain.pem"
sign with letsencrypt
}
domain irregular.ninja {
alternative names { www.irregular.ninja }
domain key "/etc/ssl/private/irregular.ninja.key"
domain full chain certificate "/etc/ssl/irregular.ninja.fullchain.pem"
sign with letsencrypt
}
domain snonux.land {
alternative names { www.snonux.land }
domain key "/etc/ssl/private/snonux.land.key"
domain full chain certificate "/etc/ssl/snonux.land.fullchain.pem"
sign with letsencrypt
}
For ACME to work, you will need to configure the HTTP daemon so that the "special" ACME challenge requests from Let's Encrypt are served correctly. I am using the standard OpenBSD httpd here. These are the snippets I use for the foo.zone host in /etc/httpd.conf (of course, you need a similar setup for all the other hosts as well):
server "foo.zone" {
listen on * port 80
location "/.well-known/acme-challenge/*" {
root "/acme"
request strip 2
}
location * {
block return 302 "https://$HTTP_HOST$REQUEST_URI"
}
}
server "foo.zone" {
listen on * tls port 443
tls {
certificate "/etc/ssl/foo.zone.fullchain.pem"
key "/etc/ssl/private/foo.zone.key"
}
location * {
root "/htdocs/gemtexter/foo.zone"
directory auto index
}
}
As you can see, plain HTTP only serves the ACME challenge path; otherwise, it redirects all requests to TLS. The TLS section then attempts to use the Let's Encrypt certificates.
It is worth noting that httpd will start without the certificates being present. This will cause a certificate error when you try to reach the HTTPS endpoint, but it helps to bootstrap Let's Encrypt. As you saw in the config snippet above, Let's Encrypt only requests the plain HTTP endpoint for the verification process, so HTTPS doesn't need to be operational at this stage. But once the certificates are generated, you will have to reload or restart httpd to pick up any new certificate.
You could now run doas acme-client foo.zone to generate the certificate or to renew it. Or you could automate it with CRON.
I have created a script /usr/local/bin/acme.sh for that for all of my domains:
#!/bin/sh
function handle_cert {
host=$1
# Create symlink, so that relayd also can read it.
crt_path=/etc/ssl/$host
if [ -e $crt_path.crt ]; then
rm $crt_path.crt
fi
ln -s $crt_path.fullchain.pem $crt_path.crt
# Requesting and renewing certificate.
/usr/sbin/acme-client -v $host
}
has_update=no
handle_cert www.buetow.org
if [ $? -eq 0 ]; then
has_update=yes
fi
handle_cert www.paul.buetow.org
if [ $? -eq 0 ]; then
has_update=yes
fi
handle_cert www.tmp.buetow.org
if [ $? -eq 0 ]; then
has_update=yes
fi
handle_cert www.dtail.dev
if [ $? -eq 0 ]; then
has_update=yes
fi
handle_cert www.foo.zone
if [ $? -eq 0 ]; then
has_update=yes
fi
handle_cert www.irregular.ninja
if [ $? -eq 0 ]; then
has_update=yes
fi
handle_cert www.snonux.land
if [ $? -eq 0 ]; then
has_update=yes
fi
# Pick up the new certs.
if [ $has_update = yes ]; then
/usr/sbin/rcctl reload httpd
/usr/sbin/rcctl reload relayd
/usr/sbin/rcctl restart smtpd
fi
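As an aside, the repeated if-blocks above could be collapsed into a loop. A sketch with handle_cert stubbed out so it runs anywhere (the real function invokes acme-client and the reload commands):

```shell
#!/bin/sh
# Loop variant of the per-host handling in acme.sh. handle_cert is
# stubbed out here so the sketch is runnable without acme-client.
handle_cert() {
    echo "would run: /usr/sbin/acme-client -v $1"
    return 0   # pretend the certificate was renewed
}

has_update=no
for host in www.buetow.org www.paul.buetow.org www.tmp.buetow.org \
            www.dtail.dev www.foo.zone www.irregular.ninja www.snonux.land; do
    if handle_cert "$host"; then
        has_update=yes
    fi
done

if [ "$has_update" = yes ]; then
    echo "would reload httpd and relayd and restart smtpd now"
fi
```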
And added the following line to /etc/daily.local to run the script once daily so that certificates will be renewed fully automatically:
/usr/local/bin/acme.sh
I now receive daily output via e-mail like this:
Running daily.local:
acme-client: /etc/ssl/buetow.org.fullchain.pem: certificate valid: 80 days left
acme-client: /etc/ssl/paul.buetow.org.fullchain.pem: certificate valid: 80 days left
acme-client: /etc/ssl/tmp.buetow.org.fullchain.pem: certificate valid: 80 days left
acme-client: /etc/ssl/dtail.dev.fullchain.pem: certificate valid: 80 days left
acme-client: /etc/ssl/foo.zone.fullchain.pem: certificate valid: 80 days left
acme-client: /etc/ssl/irregular.ninja.fullchain.pem: certificate valid: 80 days left
acme-client: /etc/ssl/snonux.land.fullchain.pem: certificate valid: 79 days left
Besides httpd, relayd (mainly for Gemini) and smtpd (for mail, of course) also use TLS certificates. And as you can see in acme.sh, the services are reloaded or restarted (smtpd doesn't support reload) whenever a certificate is generated or updated.
I didn't write all these configuration files by hand. As a matter of fact, everything is automated with the Rex configuration management system.
https://www.rexify.org
At the top of the Rexfile, I define all my hosts:
our @acme_hosts = qw/buetow.org paul.buetow.org tmp.buetow.org dtail.dev foo.zone irregular.ninja snonux.land/;
The ACME setup will be installed on the frontends group of hosts. Here, blowfish is the primary and twofish is the secondary OpenBSD box.
group frontends => 'blowfish.buetow.org', 'twofish.buetow.org';
This is my Rex task for the general ACME configuration:
desc 'Configure ACME client';
task 'acme', group => 'frontends',
sub {
file '/etc/acme-client.conf',
content => template('./etc/acme-client.conf.tpl',
acme_hosts => \@acme_hosts,
is_primary => $is_primary),
owner => 'root',
group => 'wheel',
mode => '644';
file '/usr/local/bin/acme.sh',
content => template('./scripts/acme.sh.tpl',
acme_hosts => \@acme_hosts,
is_primary => $is_primary),
owner => 'root',
group => 'wheel',
mode => '744';
file '/etc/daily.local',
ensure => 'present',
owner => 'root',
group => 'wheel',
mode => '644';
append_if_no_such_line '/etc/daily.local', '/usr/local/bin/acme.sh';
};
And there is also a Rex task just to run the ACME script remotely:
desc 'Invoke ACME client';
task 'acme_invoke', group => 'frontends',
sub {
say run '/usr/local/bin/acme.sh';
};
Furthermore, this snippet (also at the top of the Rexfile) helps to determine whether the current server is the primary server (which serves all hosts without the www. prefix) or the secondary server (which serves all hosts with the www. prefix):
# Bootstrap the FQDN from the server IP, as the hostname and domain
# facts aren't set yet (the hostname initially only comes from the myname file).
our $fqdns = sub {
my $ipv4 = shift;
return 'blowfish.buetow.org' if $ipv4 eq '23.88.35.144';
return 'twofish.buetow.org' if $ipv4 eq '108.160.134.135';
Rex::Logger::info("Unable to determine hostname for $ipv4", 'error');
return 'HOSTNAME-UNKNOWN.buetow.org';
};
# To determine whether the server is the primary or the secondary.
our $is_primary = sub {
my $ipv4 = shift;
$fqdns->($ipv4) eq 'blowfish.buetow.org';
};
The following is the acme-client.conf.tpl Rex template file used for the automation. As you can see, the www. prefix isn't used on the primary server: e.g. foo.zone is served by the primary server (in my case, a server located in Germany) and www.foo.zone by the secondary server (in my case, a server located in Japan):
#
# $OpenBSD: acme-client.conf,v 1.4 2020/09/17 09:13:06 florian Exp $
#
authority letsencrypt {
api url "https://acme-v02.api.letsencrypt.org/directory"
account key "/etc/acme/letsencrypt-privkey.pem"
}
authority letsencrypt-staging {
api url "https://acme-staging-v02.api.letsencrypt.org/directory"
account key "/etc/acme/letsencrypt-staging-privkey.pem"
}
authority buypass {
api url "https://api.buypass.com/acme/directory"
account key "/etc/acme/buypass-privkey.pem"
contact "mailto:me@example.com"
}
authority buypass-test {
api url "https://api.test4.buypass.no/acme/directory"
account key "/etc/acme/buypass-test-privkey.pem"
contact "mailto:me@example.com"
}
<%
our $primary = $is_primary->($vio0_ip);
our $prefix = $primary ? '' : 'www.';
%>
<% for my $host (@$acme_hosts) { %>
domain <%= $prefix.$host %> {
domain key "/etc/ssl/private/<%= $prefix.$host %>.key"
domain full chain certificate "/etc/ssl/<%= $prefix.$host %>.fullchain.pem"
sign with letsencrypt
}
<% } %>
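For illustration, on the secondary server (where the prefix is www.) the loop above would render a block like the following for foo.zone (my expansion of the template for illustration, not a file from the repository):

```
domain www.foo.zone {
	domain key "/etc/ssl/private/www.foo.zone.key"
	domain full chain certificate "/etc/ssl/www.foo.zone.fullchain.pem"
	sign with letsencrypt
}
```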
And this is the acme.sh.tpl:
#!/bin/sh
<%
our $primary = $is_primary->($vio0_ip);
our $prefix = $primary ? '' : 'www.';
-%>
function handle_cert {
host=$1
# Create symlink, so that relayd also can read it.
crt_path=/etc/ssl/$host
if [ -e $crt_path.crt ]; then
rm $crt_path.crt
fi
ln -s $crt_path.fullchain.pem $crt_path.crt
# Requesting and renewing certificate.
/usr/sbin/acme-client -v $host
}
has_update=no
<% for my $host (@$acme_hosts) { -%>
handle_cert <%= $prefix.$host %>
if [ $? -eq 0 ]; then
has_update=yes
fi
<% } -%>
# Pick up the new certs.
if [ $has_update = yes ]; then
/usr/sbin/rcctl reload httpd
/usr/sbin/rcctl reload relayd
/usr/sbin/rcctl restart smtpd
fi
These are the Rex tasks setting up httpd, relayd and smtpd services:
desc 'Setup httpd';
task 'httpd', group => 'frontends',
sub {
append_if_no_such_line '/etc/rc.conf.local', 'httpd_flags=';
file '/etc/httpd.conf',
content => template('./etc/httpd.conf.tpl',
acme_hosts => \@acme_hosts,
is_primary => $is_primary),
owner => 'root',
group => 'wheel',
mode => '644',
on_change => sub { service 'httpd' => 'restart' };
service 'httpd', ensure => 'started';
};
desc 'Setup relayd';
task 'relayd', group => 'frontends',
sub {
append_if_no_such_line '/etc/rc.conf.local', 'relayd_flags=';
file '/etc/relayd.conf',
content => template('./etc/relayd.conf.tpl',
ipv6address => $ipv6address,
is_primary => $is_primary),
owner => 'root',
group => 'wheel',
mode => '600',
on_change => sub { service 'relayd' => 'restart' };
service 'relayd', ensure => 'started';
};
desc 'Setup OpenSMTPD';
task 'smtpd', group => 'frontends',
sub {
Rex::Logger::info('Dealing with mail aliases');
file '/etc/mail/aliases',
source => './etc/mail/aliases',
owner => 'root',
group => 'wheel',
mode => '644',
on_change => sub { say run 'newaliases' };
Rex::Logger::info('Dealing with mail virtual domains');
file '/etc/mail/virtualdomains',
source => './etc/mail/virtualdomains',
owner => 'root',
group => 'wheel',
mode => '644',
on_change => sub { service 'smtpd' => 'restart' };
Rex::Logger::info('Dealing with mail virtual users');
file '/etc/mail/virtualusers',
source => './etc/mail/virtualusers',
owner => 'root',
group => 'wheel',
mode => '644',
on_change => sub { service 'smtpd' => 'restart' };
Rex::Logger::info('Dealing with smtpd.conf');
file '/etc/mail/smtpd.conf',
content => template('./etc/mail/smtpd.conf.tpl',
is_primary => $is_primary),
owner => 'root',
group => 'wheel',
mode => '644',
on_change => sub { service 'smtpd' => 'restart' };
service 'smtpd', ensure => 'started';
};
This is the httpd.conf.tpl:
<%
our $primary = $is_primary->($vio0_ip);
our $prefix = $primary ? '' : 'www.';
%>
# Plain HTTP for ACME and HTTPS redirect
<% for my $host (@$acme_hosts) { %>
server "<%= $prefix.$host %>" {
listen on * port 80
location "/.well-known/acme-challenge/*" {
root "/acme"
request strip 2
}
location * {
block return 302 "https://$HTTP_HOST$REQUEST_URI"
}
}
<% } %>
# Gemtexter hosts
<% for my $host (qw/foo.zone snonux.land/) { %>
server "<%= $prefix.$host %>" {
listen on * tls port 443
tls {
certificate "/etc/ssl/<%= $prefix.$host %>.fullchain.pem"
key "/etc/ssl/private/<%= $prefix.$host %>.key"
}
location * {
root "/htdocs/gemtexter/<%= $host %>"
directory auto index
}
}
<% } %>
# DTail special host
server "<%= $prefix %>dtail.dev" {
listen on * tls port 443
tls {
certificate "/etc/ssl/<%= $prefix %>dtail.dev.fullchain.pem"
key "/etc/ssl/private/<%= $prefix %>dtail.dev.key"
}
location * {
block return 302 "https://github.dtail.dev$REQUEST_URI"
}
}
# Irregular Ninja special host
server "<%= $prefix %>irregular.ninja" {
listen on * tls port 443
tls {
certificate "/etc/ssl/<%= $prefix %>irregular.ninja.fullchain.pem"
key "/etc/ssl/private/<%= $prefix %>irregular.ninja.key"
}
location * {
root "/htdocs/irregular.ninja"
directory auto index
}
}
# buetow.org special host.
server "<%= $prefix %>buetow.org" {
listen on * tls port 443
tls {
certificate "/etc/ssl/<%= $prefix %>buetow.org.fullchain.pem"
key "/etc/ssl/private/<%= $prefix %>buetow.org.key"
}
block return 302 "https://paul.buetow.org"
}
server "<%= $prefix %>paul.buetow.org" {
listen on * tls port 443
tls {
certificate "/etc/ssl/<%= $prefix %>paul.buetow.org.fullchain.pem"
key "/etc/ssl/private/<%= $prefix %>paul.buetow.org.key"
}
block return 302 "https://foo.zone/contact-information.html"
}
server "<%= $prefix %>tmp.buetow.org" {
listen on * tls port 443
tls {
certificate "/etc/ssl/<%= $prefix %>tmp.buetow.org.fullchain.pem"
key "/etc/ssl/private/<%= $prefix %>tmp.buetow.org.key"
}
root "/htdocs/buetow.org/tmp"
directory auto index
}
And this is the relayd.conf.tpl:
<%
our $primary = $is_primary->($vio0_ip);
our $prefix = $primary ? '' : 'www.';
%>
log connection
tcp protocol "gemini" {
tls keypair <%= $prefix %>foo.zone
tls keypair <%= $prefix %>buetow.org
}
relay "gemini4" {
listen on <%= $vio0_ip %> port 1965 tls
protocol "gemini"
forward to 127.0.0.1 port 11965
}
relay "gemini6" {
listen on <%= $ipv6address->($hostname) %> port 1965 tls
protocol "gemini"
forward to 127.0.0.1 port 11965
}
And last but not least, this is the smtpd.conf.tpl:
<%
our $primary = $is_primary->($vio0_ip);
our $prefix = $primary ? '' : 'www.';
%>
pki "buetow_org_tls" cert "/etc/ssl/<%= $prefix %>buetow.org.fullchain.pem"
pki "buetow_org_tls" key "/etc/ssl/private/<%= $prefix %>buetow.org.key"
table aliases file:/etc/mail/aliases
table virtualdomains file:/etc/mail/virtualdomains
table virtualusers file:/etc/mail/virtualusers
listen on socket
listen on all tls pki "buetow_org_tls" hostname "<%= $prefix %>buetow.org"
#listen on all
action localmail mbox alias <aliases>
action receive mbox virtual <virtualusers>
action outbound relay
match from any for domain <virtualdomains> action receive
match from local for local action localmail
match from local for any action outbound
For the complete Rexfile example and all the templates, please look at the Git repository:
https://codeberg.org/snonux/rexfiles
Besides ACME, other things, such as DNS servers, are also rexified. The following command will run all the Rex tasks and configure everything on my frontend machines automatically:
rex commons
commons is a task group I defined which combines all the common tasks I always want to execute on my frontend machines. This also includes the ACME tasks mentioned in this article!
ACME and Let's Encrypt greatly help reduce recurring manual maintenance work (creating and renewing certificates). Furthermore, all the certificates are free of cost! I love to use OpenBSD and Rex to automate all of this.
OpenBSD is a perfect fit here, as all the required tools are already part of the base installation. I also like underdogs: Rex is not as powerful and popular as other configuration management systems (e.g. Puppet, Chef, Salt or even Ansible). It is more of an underdog, and its community is small.
Why reinvent the wheel? I love that a Rexfile is just a Perl DSL, and OpenBSD ships Perl in the base system, so no new programming language had to be added to my mix for the configuration management system. Also, the acme.sh shell script is not a Bash script but a standard Bourne shell script, so I didn't have to install an additional shell (OpenBSD does not come with Bash pre-installed).
E-Mail your comments to hi@paul.cyou :-)
Back to the main site
Published at 2022-06-15T08:47:44+01:00; Updated at 2022-06-18
_
/_/_ .'''.
=O(_)))) ...' `.
jgs \_\ `. .'''
`..'
This blog post is a bit different from the others. It consists of multiple smaller projects worth mentioning. I got inspired by Julia Evans' "Tiny programs" blog post and the side projects of The Sephist, so I thought I would also write a blog post listing a couple of my small projects:
Tiny programs
Working on tiny projects is a lot of fun, as you don't need to worry about standards or code reviews, and you decide how and when to work on them. There are no restrictions on the technologies used. You are likely the only person working on such a tiny project, which means there are no conflicts with other developers. This is complete freedom :-).
But before going through the tiny projects, let's take a paragraph for the one-year anniversary retrospective.
It has been one year since I started posting regularly (at least once monthly) on this blog again. It has been a lot of fun (and work) doing so for various reasons:
Retrospectively, these have been the most popular blog posts of mine over the last year:
Keep it simple and stupid
But now, let's continue with the small projects worth mentioning :-)
photoalbum.sh is a minimal static HTML photo album generator. I use it to drive "The Irregular Ninja" site and for some ad-hoc (personal) albums to share photos with the family and friends.
https://codeberg.org/snonux/photoalbum
Photography is one of my casual hobbies. I love to capture interesting perspectives and motifs. I love to walk new streets and neighbourhoods I have never walked before, so I can capture those unexpected motifs, colours and moments. Unfortunately, because of time constraints (and sometimes weather constraints), I only do that infrequently.

More than 10 years ago, I wrote photoalbum.sh, a bespoke small static photo album generator in Bash, which I recently refactored to a modern Bash coding style, also freshening up the Cascading Style Sheets. Last but not least, I registered the new domain name irregular.ninja.
The thumbnails are presented in a random order, and there are also random CSS effects for each preview. There's also a simple background blur for each generated page. And all that in less than 300 lines of Bash code! The script requires ImageMagick (available for all common Linux and *BSD distributions) to be installed.
As you can see, there is a lot of randomization and irregularity going on. Thus, the name "Irregular Ninja" was born.
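The randomization idea can be sketched in a few lines of shell (my illustration, not code from photoalbum.sh itself; the effect class names and file names are made up):

```shell
#!/bin/sh
# Sketch: emit thumbnails in shuffled order, each with a randomly chosen
# CSS effect class. Uses shuf from GNU coreutils.
gen_thumbs() {
    for img in $(printf '%s\n' 01.jpg 02.jpg 03.jpg | shuf); do
        effect=$(printf '%s\n' sepia blur tilt grayscale | shuf -n 1)
        printf '<img class="%s" src="thumbs/%s">\n' "$effect" "$img"
    done
}
gen_thumbs
```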
https://irregular.ninja
I only use a digital compact camera or a smartphone to take the photos. I don't like the idea of carrying around a big camera with me "just in case", so I keep it small and simple. The best camera is the camera you have with you. :-)
I hope you like this photo site. It's worth checking back every other month or so!
I bullet journal. I write my notes into a Leuchtturm paper notebook. Once it's full, I scan it to a PDF file and archive it. As of writing this, I am at journal #7 (each between 123 and 251 A5 pages), which means there is already a lot of material.
Once in a while, I want to revisit older notes and ideas. For that, I have written a simple Bash script, randomjournalpage.sh, which randomly picks a PDF file from a folder, extracts 42 pages from it at a random page offset, and opens them in a PDF viewer (Evince in my case, as I am a GNOME user).
https://codeberg.org/snonux/randomjournalpage
There's also a weekly cron job on my servers to remind me that I might want to read my old journals again. My laptop also runs this script on every boot and saves the output to a NextCloud folder. From there, it's synchronized to the NextCloud server so that I can pick it up with my smartphone later when I am "on the road".
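The page-window selection boils down to a bit of arithmetic. A sketch of my reading of the idea (an assumption, not the original script, which would then extract that range from the chosen PDF and open it in Evince):

```shell
#!/bin/sh
# Pick a random 42-page window that fits within a journal's page count.
pick_window() {
    pages=$1
    span=42
    # Portable random integer in [1, pages - span + 1] via awk's rand().
    start=$(awk -v max=$((pages - span + 1)) \
        'BEGIN { srand(); print 1 + int(rand() * max) }')
    echo "$start $((start + span - 1))"
}
pick_window 251
```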
guprecords is a Perl script which reads multiple uprecord files (produced by uptimed, a widely available daemon for recording server uptimes) and generates combined uptime statistics across multiple hosts. I keep the record files of all my personal computers in a Git repository (I even keep the records of boxes I don't own or use anymore), and there's already quite a collection. It looks like this:
❯ perl ~/git/guprecords/src/guprecords --indir=./stats/ --count=20 --all
Pos | System      | Kernel               | Uptime        | Boot time
1   | sun         | FreeBSD 10.1-RELEA.. | 502d 03:29:19 | Sun Aug 16 15:56:40 2015
2   | vulcan      | Linux 3.10.0-1160... | 313d 13:19:39 | Sun Jul 25 18:32:25 2021
3   | uugrn       | FreeBSD 10.2-RELEASE | 303d 15:19:35 | Tue Dec 22 21:33:07 2015
4   | uugrn       | FreeBSD 11.0-RELEA.. | 281d 14:38:04 | Fri Oct 21 15:22:02 2016
5   | deltavega   | Linux 3.10.0-957.2.. | 279d 11:15:00 | Sun Jun 30 11:42:38 2019
6   | vulcan      | Linux 3.10.0-957.2.. | 279d 11:12:14 | Sun Jun 30 11:43:41 2019
7   | deltavega   | Linux 3.10.0-1160... | 253d 04:42:22 | Sat Apr 24 13:34:34 2021
8   | host0       | FreeBSD 6.2-RELEAS.. | 240d 02:23:23 | Wed Jan 31 20:34:46 2007
9   | uugrn       | FreeBSD 11.1-RELEA.. | 202d 21:12:41 | Sun May  6 18:06:17 2018
10  | tauceti     | Linux 3.2.0-4-amd64  | 197d 18:45:40 | Mon Dec 16 19:47:54 2013
11  | pluto       | Linux 2.6.32-5-amd64 | 185d 11:53:04 | Wed Aug  1 07:34:10 2012
12  | sun         | FreeBSD 10.3-RELEA.. | 164d 22:31:55 | Sat Jul 22 18:47:21 2017
13  | vulcan      | Linux 3.10.0-1160... | 161d 07:08:43 | Sun Feb 14 10:05:38 2021
14  | sun         | FreeBSD 10.3-RELEA.. | 158d 21:18:36 | Sat Jan 27 10:18:57 2018
15  | uugrn       | FreeBSD 11.1-RELEA.. | 157d 20:57:24 | Fri Nov  3 05:02:54 2017
16  | tauceti-f   | Linux 3.2.0-3-amd64  | 150d 04:12:38 | Mon Sep 16 09:02:58 2013
17  | tauceti     | Linux 3.2.0-4-amd64  | 149d 09:21:43 | Mon Aug 11 09:47:50 2014
18  | pluto       | Linux 3.2.0-4-amd64  | 142d 02:57:31 | Mon Sep  8 01:59:02 2014
19  | tauceti-f   | Linux 3.2.0-3-amd64  | 132d 22:46:26 | Mon May  6 11:11:35 2013
20  | keppler-16b | Darwin 13.4.0        | 131d 08:17:12 | Thu Jun 11 10:44:25 2015
It can also sum up all uptimes for each host to generate a total per-host uptime top list:
❯ perl ~/git/guprecords/src/guprecords --indir=./stats/ --count=20 --total
Pos | System        | Kernel               | Uptime         |
1   | uranus        | Linux 5.4.17-200.f.. | 1419d 19:05:39 |
2   | sun           | FreeBSD 10.1-RELEA.. | 1363d 11:41:14 |
3   | vulcan        | Linux 3.10.0-1160... | 1262d 20:27:48 |
4   | uugrn         | FreeBSD 10.2-RELEASE | 1219d 15:10:16 |
5   | deltavega     | Linux 3.10.0-957.2.. | 1115d 06:33:55 |
6   | pluto         | Linux 2.6.32-5-amd64 | 1086d 10:44:05 |
7   | tauceti       | Linux 3.2.0-4-amd64  | 846d 12:58:21  |
8   | tauceti-f     | Linux 3.2.0-3-amd64  | 625d 07:16:39  |
9   | host0         | FreeBSD 6.2-RELEAS.. | 534d 19:50:13  |
10  | keppler-16b   | Darwin 13.4.0        | 448d 06:15:00  |
11  | tauceti-e     | Linux 3.2.0-4-amd64  | 415d 18:14:13  |
12  | moon          | Darwin 18.7.0        | 326d 11:21:42  |
13  | callisto      | Linux 4.0.4-303.fc.. | 303d 12:18:24  |
14  | alphacentauri | FreeBSD 10.1-RELEA.. | 300d 20:15:00  |
15  | earth         | Linux 5.13.14-200... | 289d 08:05:05  |
16  | makemake      | Linux 5.11.9-200.f.. | 286d 21:53:03  |
17  | london        | Linux 3.2.0-4-amd64  | 258d 15:10:38  |
18  | fishbone      | OpenBSD 4.1 ..       | 223d 05:55:26  |
19  | sagittarius   | Darwin 15.6.0        | 198d 23:53:59  |
20  | mars          | Linux 3.2.0-4-amd64  | 190d 05:44:21  |
All of this has no real practical use, but it's fun!
The rexfiles project contains all Rex files for my (personal) server setup automation. A Rexfile is written in a Perl DSL run by the Rex configuration management system. It's pretty much KISS and that's why I love it. It suits my personal needs perfectly.
https://codeberg.org/snonux/rexfiles
This is an E-Mail I posted to the Rex mailing list:
Hi there! I was searching for a simple way to automate my personal OpenBSD setup. I found that configuration management systems like Puppet, Salt, Chef, etc. were too bloated for my personal needs. So for a while, I was configuring everything by hand. At one point I got fed up and started writing shell scripts. But that was not the holy grail either, so I looked at Ansible. I found that Ansible has some dependencies on Python on the target machine when you want to use all of its features. Furthermore, I am not really familiar with Python. But then I remembered that there was also Rex. It's written in my beloved Perl. Also, OpenBSD comes with Perl in the base system out of the box, which makes it integrate nicely: my automation, and the scripts deployed to the system via that automation, are all in the same language. Rex may not have all the features of other configuration management systems, but it's easy to work around or extend when you know Perl. Thanks!
rubyfy is a fancy SSH loop wrapper written in Ruby for running shell commands on multiple remote servers at once. I also forked this project for work (under a different name), where I added even more features, such as automatic server discovery. It's used frequently by many colleagues. Here are some examples:
# Run command 'hostname' on server foo.example.com
./rubyfy.rb -c 'hostname' <<< foo.example.com
# Run command 'id' as root (via sudo) on all servers listed in the list file
# Do it on 10 servers in parallel
./rubyfy.rb --parallel 10 --root --command 'id' < serverlist.txt
# Run a fancy script in background on 50 servers in parallel
./rubyfy.rb -p 50 -r -b -c '/usr/local/scripts/fancy.zsh' < serverlist.txt
# Grep for specific process on both servers and write output to ./out/grep.txt
echo {foo,bar}.example.com | ./rubyfy.rb -p 10 -c 'pgrep -lf httpd' -n grep.txt
# Reboot server only if file /var/run/maintenance.lock does NOT exist!
echo foo.example.com |
./rubyfy.rb --root --command reboot --precondition /var/run/maintenance.lock
dyndns is a tiny shell script which implements "your" own DynDNS service. It relies on SSH access to the authoritative DNS server and the nsupdate command. There is really no need to use any of the "other" free DynDNS services out there.
Syntax (run from the client, connecting to the DNS server through SSH):
ssh dyndns@dyndnsserver /path/to/dyndns-update \
your.host.name. TYPE new-entry TIMEOUT
This is a real world example:
ssh dyndns@dyndnsserver /path/to/dyndns-update \
    local.buetow.org. A 137.226.50.91 30
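On the server side, /path/to/dyndns-update presumably turns these arguments into an nsupdate batch. A minimal sketch of that idea (my assumption, not the actual script; here the batch is only printed, while the real script would pipe it into nsupdate on the authoritative server):

```shell
#!/bin/sh
# Build an nsupdate batch for: name, record type, new entry, TTL.
# In a real setup you would pipe the output into `nsupdate -l` (or
# nsupdate with a key file) on the authoritative DNS server.
make_batch() {
    name=$1 type=$2 entry=$3 ttl=$4
    printf 'update delete %s %s\n' "$name" "$type"
    printf 'update add %s %s %s %s\n' "$name" "$ttl" "$type" "$entry"
    printf 'send\n'
}
make_batch local.buetow.org. A 137.226.50.91 30
# Output:
#   update delete local.buetow.org. A
#   update add local.buetow.org. 30 A 137.226.50.91
#   send
```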
cpuinfo is a tiny GNU Awk script for Linux which displays information about the CPU. All it does is present /proc/cpuinfo in an easier-to-read way. The output is somewhat more compact than that of the standard lscpu command commonly found on Linux distributions.
❯ ./cpuinfo
cpuinfo (c) 1.0.2 Paul Buetow
11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz
GenuineIntel 12288 KB cache
p = 001 Physical processors
c = 004 Cores
s = 008 Siblings (Hyper-Threading enabled if s != c)
v = 008 [v = p*c*(s != c ? 2 : 1)] Total logical CPUs
Hyper-Threading is enabled
0003000 MHz each core
0012000 MHz total
0005990 Bogomips each processor (including virtual)
0023961 Bogomips total
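The v = p*c*(s != c ? 2 : 1) line in that output is just the usual logical-CPU arithmetic, sketched here in shell with the sample values from above:

```shell
#!/bin/sh
# 1 physical package, 4 cores, 8 siblings: siblings != cores means
# Hyper-Threading is on, doubling the logical CPU count.
p=1 c=4 s=8
if [ "$s" -ne "$c" ]; then ht=2; else ht=1; fi
v=$((p * c * ht))
echo "$v logical CPUs"   # prints "8 logical CPUs"
```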
netdiff is a shell wrapper that uses the standard diff tool over the network to compare a file between two computers. It uses netcat for the network part and encrypts all traffic using OpenSSL. This is how it's used:
1. Open two terminal windows and log in to two different hosts (you could use ClusterSSH or tmux here).
2. On the first host, run netdiff otherhost.example.org /file/to/diff.txt; on the second host, run netdiff firsthost.example.org /file/to/diff.txt.
3. You will then see the file differences.
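My reading of the mechanism (a sketch under assumptions, not the original script): one side streams the file through OpenSSL symmetric encryption and netcat, and the other side decrypts and feeds it to diff on stdin via "-". Locally, replacing the nc hop with a plain pipe, the pipeline looks like this:

```shell
#!/bin/sh
# Simulate the netdiff pipeline without the network: the "remote" side
# encrypts and streams the file, the "local" side decrypts and diffs it
# against its own copy read from stdin ("-").
pass=sharedsecret
printf 'line1\nline2\n'   > /tmp/netdiff-local.txt
printf 'line1\nCHANGED\n' > /tmp/netdiff-remote.txt

openssl enc -aes-256-cbc -pbkdf2 -pass pass:"$pass" < /tmp/netdiff-remote.txt |
    openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:"$pass" |
    diff /tmp/netdiff-local.txt - || true
```

In the real tool, the part between the two openssl invocations would be an `nc` listener on one host and an `nc` client on the other.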
https://codeberg.org/snonux/netdiff
muttdelay is a shell script for the Mutt email client for delaying outgoing E-Mails. For example, you want to write an email on Saturday but don't want to bother the recipient before Monday. It relies on cron.
https://codeberg.org/snonux/muttdelay
jsmstrade is a minimalistic graphical Java Swing client for sending SMS messages via the SMStrade service.

ipv6test is a quick and dirty Perl CGI script for testing whether your browser connects via IPv4 or IPv6. It requires you to set up three sub-domains: one reachable only via IPv4 (e.g. test4.ipv6.buetow.org), another reachable only via IPv6 (e.g. test6.ipv6.buetow.org), and the main one reachable through both protocols (e.g. ipv6.buetow.org).
I don't have it running on any of my servers at the moment. This means that there is no demo to show now. Sorry!
japi is a small Perl script for listing open Jira issues. It might be broken by now, as the Jira APIs may have changed. Sorry! But feel free to fork and modernize it. :-)
https://codeberg.org/snonux/jsmstrade
Debroid is a tutorial and a set of scripts to install and run a Debian chroot on an Android phone.
Check out my previous post about it
I am not using Debroid anymore, as I have switched to Termux.
https://termux.com
PerlDaemon is a minimal daemon for Linux and other Unix-like operating systems, programmed in Perl. It is a minimal but fairly functional and generic service framework. This means it does not do anything useful on its own other than providing a framework for starting, stopping, configuring and logging. To do something useful, a module (written in Perl) must be provided.
Check out my previous post about it
There are more projects on my Codeberg page, but they aren't as tiny as the ones mentioned in this post, or they aren't finished yet, so I won't list them here. However, there are also a few more scripts I use frequently (not publicly accessible (yet?)) which I would like to mention:
worktime.rb, for example, is a command-line Ruby script I use to track the time I spend working. This is to make sure that I don't overwork (particularly useful when working from home). It also generates some daily and weekly stats and carries over work time surpluses or deficits to the next work day, week or even year.
It has some special features, such as tracking time for self-improvement/development, days off, time spent on lunch breaks and time spent on pet projects.
An example weekly report looks like this (I often don't track my lunch time; instead, I stop the work timer when I go out for lunch and start it again once back at the desk):
Mon 20211213 50: work:5.92h
Tue 20211214 50: work:7.47h lunch:0.50h pet:0.42h
Wed 20211215 50: work:8.86h pet:0.50h
Thu 20211216 50: work:8.02h pet:0.50h
Fri 20211217 50: work:9.81h
* Sat 20211218 50: work:0.00h selfdevelopment:1.00h
* Sun 20211219 50: work:2.08h pet:1.00h selfdevelopment:-2.08h
================================================
balance:0.06h work:42.15h lunch:0.50h pet:2.42h selfdevelopment:-1.08h buffer:8.38h
When I start work, all I do is run the wtlogin command, and after finishing work I run wtlogout. My shell reminds me when I work without having logged in. The tool uses a simple JSON database which is editable with wtedit (this opens the JSON in Vim). The report shown above can be generated with wtreport, and any out-of-bounds booking can be added with the wtadd command.
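The carry-over idea boils down to simple arithmetic. An illustrative sketch (not worktime.rb's actual logic, which factors in more categories than this and so won't reproduce the report's exact balance) against an 8-hour daily target, using the weekday work hours from the report above:

```shell
#!/bin/sh
# Accumulate the surplus/deficit versus an 8h daily target across the
# five weekday "work" entries from the sample report.
awk 'BEGIN {
    target = 8
    split("5.92 7.47 8.86 8.02 9.81", hours, " ")
    balance = 0
    for (i = 1; i <= 5; i++)
        balance += hours[i] - target
    printf "weekly balance: %.2fh\n", balance
}'
# prints "weekly balance: 0.08h"
```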
geheim.rb is my personal password and document store ("geheim" is the German word for secret). It's written in Ruby and relies heavily on Git, FZF (for search), Vim and standard encryption algorithms. Unlike the standard pass Unix password manager, geheim also encrypts the file names and password titles.
The tool is command-line driven but also provides an interactive shell when invoked with geheim shell. It also works on my Android phone via Termux, so I always have all my documents and passwords with me.
backup is a Bash script which runs once daily (or on every boot) on my home FreeBSD NAS server and performs backup-related tasks, such as creating a local backup of my remote NextCloud instance, creating encrypted (incremental) ZFS snapshots of everything stored on the NAS, and synchronizing (via rsync) backups to a remote cloud storage. It can also synchronize backups to a local external USB drive.
Check out my offsite backup series
Here's a bonus...
▄ █ ▄ ▄ █ ▄ ▄ █ ▄
▄▀█▀▄ ▄▀█▀▄ ▄▀█▀▄
▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▀ ▀ ▀
█ ▄▄ ▄▄ █
█ █ █▀▀▀█ █ █ █ ▄▀ ▄▀▀▀▀▄ █▄ █ █▀▀▀▀▀▄ ▄▀▀▀▀▄ █ ▀▀▀█▀▀▀ ▄▀▀▀▀▄
█ ▀▀▀▀▀▀▀▀▀ █ █ ▄█ █ █ █ ▀▄ █ █▄▄▄▄▄▀ █▄▄▄▄▄▄█ █ █ █ █
█ ▄▀▀▀▀▀▀▀▀▀▀▀▄ █ █▀ ▀▄ ▀▄ ▄▀ █ ▀▄█ █ ▀▄ ▄ █ █ ▀▄ ▄▀
▀▄█▄█▄▄▄▄▄▄▄█▄█▄▀ ▀ ▀ ▀▀▀▀ ▀ ▀ ▀ ▀▀▀▀ ▀ ▀ ▀▀▀
*THIS ISN'T MY PROJECT*, but I found KONPEITO, an interesting Gemini capsule. It's a quarterly released lo-fi music mixtape distributed only through Gemini (and not the web).
gemini://konpeito.media
If you wonder what Gemini is:
Welcome to the Geminispace
E-Mail your comments to hi@paul.cyou :-)
Back to the main site

Published at 2022-04-10T10:09:11+01:00; Updated at 2022-04-18
. + . . . . . .
. . . *
. * . . . . . . + .
"You Are Here" . . + . . .
. | . . . . . .
| . . . +. + .
\|/ . . . .
. . V . * . . . . + .
+ . . . +
. . + .+. .
. . . + . . . . .
. . . . . . . . ! /
* . . . + . . - O -
. . . + . . * . . / |
. + . . . .. + .
. . . . * . * . +.. . *
. . . . . . . . + . . +
- the universe
I have been participating in an annual work-internal project contest (we call it the Pet Project contest) since I moved to London and switched jobs to my current employer. I am very happy to say that I won a "silver" prize last week 🎆. Over the last couple of years, I have been a finalist in this contest six times and won some kind of prize five times. Some of my projects were also released as open source software; one had a magazine article published, and for another I wrote an article on my employer's engineering blog. If you have followed all the posts on this blog (the one you are currently reading), then you have probably figured out what these projects were:
DTail - The distributed log tail program
Note that my latest silver-prize project isn't open source software, so there is no public material I can refer to. Maybe for the next one again?
I want to point out, though, that I have never won the "gold" prize, and this is the first time I won "silver". Looking at the company's contest history, I believe I am the employee with the most consecutive successful project submissions (my streak broke because I didn't participate last year) and also the one with the highest total count of successful projects. Sorry if this all sounds a bit self-promotional, but I think it is something to be proud of. Consistency beats a one-off success.
I often put endless hours and sometimes sleepless nights into such projects, all of it in my own time. As an engineer whose native tongue is not English, I also have to present such a project in front of the CEO, CTO and CPO, the Chief Scientist, the founders of the company and, as if that weren't enough, all the other staff of the company too. I usually also demonstrate a working prototype live on a production grid during the presentation. 😓
So why would I sign myself up for such side projects? Isn't it a lot of stress and extra work? Besides the prize money (which you cannot count on; you may or may not win something) and the recognition, there are other motivations:
How did I manage to be creative with all these Pet Projects? Unfortunately, there is no step-by-step guide I could point you to. But what I want to do in this blog post is share my personal experience so far.
There must be a problem to be solved or a thing to be improved. It makes no sense to have a project without a goal. A problem might be obvious to you, and you don't even need to think about it. In that case, you are all set and can immerse yourself in the problem.
If, however, you don't know what problem you want to solve: do you really need to be creative? All problems are solved anyway, correct? In that case, just go on with your work. As you immerse yourself in your daily work, you will find a project naturally after a while. I don't believe you should force a project artificially; it should come naturally to you. You should have an interest in the problem domain and a strong desire to find a proper solution. Artificially created projects come with the catch that you might give up on them sooner rather than later due to a lack of motivation and desire.
If you want to be creative in a field, you must know a lot about it. The more you know about it, the more dots you can connect. When you are learning a new technology or if you are thinking about a tough problem, do it thoroughly. Don't let anything distract you. Read books, watch lectures, listen to podcasts or audiobooks about the topic, talk to other people working on similar topics. Immerse yourself for multiple hours per day, multiple days per week, multiple weeks and maybe even months. Create your own inner universe.
But once the day is over, shut your thoughts down. Hit the off-switch. Stop thinking about the problem for the remainder of the day. This can be difficult, as you haven't solved the problem or fully understood the new technology yet, and you really want to get to the point. But be strict with yourself and stop thinking about it for a while.
You must understand that you are more than just your conscious thoughts. Your brain does a lot of work in the background that you aren't consciously aware of. When you stop consciously thinking about a problem, your brain continues processing it. You might have experienced the "aha" effect, where you suddenly had an idea out of nowhere (e.g. during a walk, in the shower, or in the morning when you woke up). This is your conscious self downloading a result from the background thread of your brain. You can amplify this effect by immersing yourself in the problem intensely before giving your conscious self a break.
Sometimes, depending on how deeply you were immersed, you may need to let the problem go for a couple of days (e.g. over a weekend) before you can download a new insight.
Wherever you go, make sure you always have something to take notes with. Once you have an idea out of nowhere (or rather, from your unconscious but volatile brain), you really want to write it down to persistent storage. It doesn't matter what kind of note-taking device you use here. It can be a paper journal, or it can be your smartphone.
My advice is to have a separate section where you put your notes of all of your ideas. At home or in the office, I write everything in my paper journal. When I am not at home, I use a digital note-taking app on my phone. Later, I copy the digital notes from it into a project-specific section of my paper journal.
I prefer taking notes on paper, as it gives you more freedom in how you structure them. You can use any colour, and you can quickly sketch diagrams without any complex computer program.
I have noticed that while sleep-deprived I am (obviously) less able to concentrate, and it is difficult to immerse myself in a focused way. On the other hand, I am a lot more creative than when I am well rested: my brain suddenly presents me with connections I had not thought of before. In that state, I write down every idea I have on a sheet of paper or in my journal, so I can pick it up later. I then often continue to philosophise about a possible solution; sometimes to the point of absurdity, and sometimes to something pretty useful.
I am not saying that you should skip sleep. By all means, if you can sleep, then sleep. But there are days when you don't manage to sleep (e.g. you thought too much about a project and didn't manage to hit the off-switch). This is where you can take advantage of your current state of mind. Disclaimer: Skipping sleep damages your health, so please don't try this on purpose. But in case you had a bad night, remember this trick.
Take regular breaks. Don't skip your lunch break; ideally, take a walk during lunchtime. After work, do some kind of workout or attend a sports class. Do something completely unrelated to work before going to sleep (e.g. visit a parallel universe and read a Science Fiction novel). In short: totally hit the off-switch once your work for the day is finished. You will be much more energised and motivated the next time you open your work laptop.
I personally love to read Science Fiction novels
I skip breakfast and lunch during the week. This means that I intermittent fast for 18-20 hours daily on average. It may sound odd to most people (who don't intermittent fast), but in a fasted state, I can be even more focused, which helps me immerse myself in something even more. Not having breakfast and lunch also gives me back some time for other things (e.g. a nice walk, where I listen to podcasts or audiobooks, or practise using my camera (street photography)). I relax my routine on weekends, when I may enjoy a meal at any given time of the day.
It also helps a lot to eat healthily. Healthy food makes your brain work more efficiently. But I won't go into more detail here, as nothing is as contradictory as the health and food industry. Conduct your own research. Your opinion may differ from mine anyway, and everyone's body reacts to certain foods differently. What works for one person may not work for another. But be aware that you will find a lot of wrong and conflicting information on the internet, so always use multiple sources for your research.
It's easy to fall into the habit of "boxed" thinking, but creativity is exactly the opposite. Once in a while, make yourself ask: "Is A really required to do B?". Many assumptions are simply believed to be true. But are they really? A concrete example: "At work, we only use programming language L and framework F, and therefore it is the standard we must use."
Another way to think about it: "Is there an alternative way to accomplish the desired result? What if there were no programming language L and framework F? What would I do instead?". Maybe you would use programming language X to implement your own domain-specific language, which does what framework F would have done, but in exactly the way you want, and much more flexibly than F! And maybe language X would be much better suited than L for implementing a DSL anyway. Conclusion: It never hurts to verify your assumptions.
Often, you will also find solutions to problems you never intended to solve, and discover new problems you never imagined actually existed. That is not necessarily a bad thing, but it might sidetrack you on your path to solving a particular problem. So be careful not to get sidetracked too much. In that case, just save a note for later reference (maybe your next pet project?) somewhere and carry on with your actual problem.
Don't be afraid to think about weird and unconventional solutions. Sometimes, the most unconventional solution is the best solution to a problem. Also, try to keep to the basics. The best solutions are KISS.
Keep it simple and stupid
A small additional trick: you can train yourself to generate new and unconventional ideas. Just write down 20 random ideas every day. It doesn't matter what the ideas are about or whether they are useful. The purpose of this exercise is to make your brain think about something new and unconventional. These can be absurd ideas, such as "Jump out of the window naked in the morning in order to wake up faster". Of course, you would never do that, but at least you had an idea and made your brain generate something.
Especially as a DevOps Engineer, you can be busy all the time with small, but frequent, ad hoc tasks. Don't lose yourself here. Yes, you should pay attention to your job and those tasks, but you should also make some room for creativity. Don't schedule meeting after ad hoc work after meeting after Jira ticket after another Jira ticket. There should also be some "free" space in your calendar.
Use the "free" time to play around with your tech stack. Try out new options, explore the system metrics, explore new tools, and so on. This will pay dividends in new ideas you would never have come up with if you were "just busy" like a machine.
Sometimes, I pick a Unix manual page of a random command and start reading it. I have a bash helper function which will pick one for me:
❯ where learn
learn () {
    man $(ls /bin /sbin /usr/bin /usr/sbin 2>/dev/null | shuf -n 1) |
        sed -n "/^NAME/ { n;p;q }"
}
❯ learn
perltidy - a perl script indenter and reformatter
❯ learn
timedatectl - Control the system time and date
That summarises all the advice I have, really. I hope this was interesting and helpful for you.
I have one more small tip: I never publish a blog post the same day I write it. After finishing it, I always wait a couple of days. In every case so far, I have had an additional idea to add or something to fine-tune.
Another article I found interesting and relevant is
Creative Paradise by The Sephist
Relevant books I can recommend are:
E-Mail your comments to hi@paul.cyou :-)
Back to the main site
,_---~~~~~----._
_,,_,*^____ _____``*g*\"*,
____ _____ _ _ / __/ /' ^. / \ ^@q f
| _ \_ _|_ _(_) | @f | @)) | | @)) l 0 _/
| | | || |/ _` | | | \`/ \~____ / __ \_____/ \
| |_| || | (_| | | | | _l__l_ I
|____/ |_|\__,_|_|_| } [______] I
] | | | |
] ~ ~ |
| |
| |
// Available log levels.
const (
    None level = iota
    Fatal level = iota
    Error level = iota
    Warn level = iota
    Info level = iota
    Default level = iota
    Verbose level = iota
    Debug level = iota
    Devel level = iota
    Trace level = iota
    All level = iota
)
{
"Client": {
"TermColorsEnable": true,
"TermColors": {
"Remote": {
"DelimiterAttr": "Dim",
"DelimiterBg": "Blue",
"DelimiterFg": "Cyan",
"RemoteAttr": "Dim",
"RemoteBg": "Blue",
"RemoteFg": "White",
"CountAttr": "Dim",
"CountBg": "Blue",
"CountFg": "White",
"HostnameAttr": "Bold",
"HostnameBg": "Blue",
"HostnameFg": "White",
"IDAttr": "Dim",
"IDBg": "Blue",
"IDFg": "White",
"StatsOkAttr": "None",
"StatsOkBg": "Green",
"StatsOkFg": "Black",
"StatsWarnAttr": "None",
"StatsWarnBg": "Red",
"StatsWarnFg": "White",
"TextAttr": "None",
"TextBg": "Black",
"TextFg": "White"
},
"Client": {
"DelimiterAttr": "Dim",
"DelimiterBg": "Yellow",
"DelimiterFg": "Black",
"ClientAttr": "Dim",
"ClientBg": "Yellow",
"ClientFg": "Black",
"HostnameAttr": "Dim",
"HostnameBg": "Yellow",
"HostnameFg": "Black",
"TextAttr": "None",
"TextBg": "Black",
"TextFg": "White"
},
"Server": {
"DelimiterAttr": "AttrDim",
"DelimiterBg": "BgCyan",
"DelimiterFg": "FgBlack",
"ServerAttr": "AttrDim",
"ServerBg": "BgCyan",
"ServerFg": "FgBlack",
"HostnameAttr": "AttrBold",
"HostnameBg": "BgCyan",
"HostnameFg": "FgBlack",
"TextAttr": "AttrNone",
"TextBg": "BgBlack",
"TextFg": "FgWhite"
},
"Common": {
"SeverityErrorAttr": "AttrBold",
"SeverityErrorBg": "BgRed",
"SeverityErrorFg": "FgWhite",
"SeverityFatalAttr": "AttrBold",
"SeverityFatalBg": "BgMagenta",
"SeverityFatalFg": "FgWhite",
"SeverityWarnAttr": "AttrBold",
"SeverityWarnBg": "BgBlack",
"SeverityWarnFg": "FgWhite"
},
"MaprTable": {
"DataAttr": "AttrNone",
"DataBg": "BgBlue",
"DataFg": "FgWhite",
"DelimiterAttr": "AttrDim",
"DelimiterBg": "BgBlue",
"DelimiterFg": "FgWhite",
"HeaderAttr": "AttrBold",
"HeaderBg": "BgBlue",
"HeaderFg": "FgWhite",
"HeaderDelimiterAttr": "AttrDim",
"HeaderDelimiterBg": "BgBlue",
"HeaderDelimiterFg": "FgWhite",
"HeaderSortKeyAttr": "AttrUnderline",
"HeaderGroupKeyAttr": "AttrReverse",
"RawQueryAttr": "AttrDim",
"RawQueryBg": "BgBlack",
"RawQueryFg": "FgCyan"
}
}
},
...
}
jsonschema -i dtail.json schemas/dtail.schema.json
% dtail --files /var/log/foo.log
% dmap --files /var/log/foo.log --query 'from TABLE select .... outfile result.csv'
% dtail /var/log/foo.log
% dcat --plain /etc/passwd > /etc/test
% diff /etc/test /etc/passwd # Same content, no diff
% dgrep --plain --regex 'somethingspecial' /var/log/foo.log |
dmap --query 'from TABLE select .... outfile result.csv'
% awk '.....' < /some/file | dtail ....
% cat check_dtail.sh
#!/bin/sh
exec /usr/local/bin/dtailhealth --server localhost:2222
% export DTAIL_INTEGRATION_TEST_RUN_MODE=yes
% make
...
% go clean -testcache
% go test -race -v ./integrationtests
Published at 2022-02-04T09:58:22+00:00; Updated at 2022-02-18
/( )`
\ \___ / |
/- _ `-/ '
(/\/ \ \ /\
/ / | ` \
O O ) / |
`-^--'`< '
(_.) _ ) /
`.___/` /
`-----' /
<----. __ / __ \
<----|====O)))==) \) /====
<----' `--' `.__,' \
| |
\ /
______( (_ / \______
(FL) ,' ,-----' | \
`--{__________) \/ "Berkeley Unix Daemon"
This is a list of the operating systems I currently use. The list is in no particular order and will also be updated over time. The very first operating system I used was MS-DOS (mainly for games), and the very first Unix-like operating system I used was SuSE Linux 5.3. My first smartphone OS was Symbian on a clunky Sony Ericsson device.
Fedora Linux is the operating system I use on my primary (personal) laptop, a ThinkPad X1 Carbon Gen 9, which comes with official Lenovo Linux support. I have already noticed hardware firmware updates from Lenovo being installed directly through Fedora. Fedora is a real powerhouse, cutting-edge and reasonably stable at the same time. It's backed by Red Hat.
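Presumably those firmware updates arrive via fwupd and the Linux Vendor Firmware Service (LVFS), which Fedora ships out of the box; that the updates come through this channel is my assumption. Checking and applying them from the terminal looks roughly like this:

```shell
# Refresh the firmware metadata from LVFS
fwupdmgr refresh
# List devices with pending firmware updates
fwupdmgr get-updates
# Apply them (a reboot may be required)
fwupdmgr update
```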
I also use Fedora on my Microsoft Surface Go 2 convertible tablet. Fedora works quite OK (and much better than Windows) on this device. It's also the perfect travel companion.
I use the GNOME desktop on my Fedora boxes and have memorised and customised a bunch of keyboard shortcuts. But since I mostly work in the terminal (with tmux), the desktop environment I use is only of secondary importance.
I installed EndeavourOS on my (older) ThinkPad X240 to try out an Arch-based Linux distribution. I also could have installed plain Arch, but I don't see the point when there is EndeavourOS. EndeavourOS is as close as you can get to the plain Arch experience, but with an easy installer. I am not saying that it's difficult to install plain Arch, but unless you are new to Linux and want to learn about the installation procedure, it's just a waste of time in my humble opinion. Give Linux From Scratch a shot instead if you really want to learn about Linux.
https://www.linuxfromscratch.org/
On EndeavourOS, I use the Xfce desktop environment, which feels very snappy and fast on the X240 (which I purchased back in 2014). Usually, my X240 stands right next to my work laptop, and I use it for playing music (mainly online radio streams), personal note-taking, and occasional emailing and instant messaging.
As this is a rolling Linux distribution, a lot of software updates come through every day. Sometimes, it only takes a minute until the next version of a package is available. Honestly, I find it a bit annoying to constantly catch up with all the updates. For now, I will live with it and/or automate it a bit more. It'll be OK if it breaks occasionally, as this is not my primary laptop anyway.
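A minimal sketch of what such automation could look like (assuming pacman and a sudo rule that allows it non-interactively; the script name is hypothetical, and it could be run from a cron job or systemd timer):

```shell
#!/usr/bin/env bash
# auto-update.sh - hypothetical unattended update helper for an Arch-based box.
set -euo pipefail

# Sync the package databases and upgrade everything without prompting.
sudo pacman -Syu --noconfirm

# Remember when the last automated run happened.
date --iso-8601=seconds >> "$HOME/.last-auto-update"
```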
Arch Linux and EndeavourOS are community distributions. This means that there is no big corporation lurking in the background. They won't give you firmware updates for cutting-edge hardware out of the box, but they are still a very good choice for hobbyists, and also for older hardware where future firmware updates are less likely to happen anyway.
I am very happy with the package availability through the official repository and AUR.
https://endeavouros.com/
I have run FreeBSD on many occasions. Right after SuSE Linux, FreeBSD (around 4.x) was the second open source system I used on a regular basis. I hadn't even started university yet when I began using it :-). A former employer of mine even allowed me to install FreeBSD on my main workstation (which I actually did, and I used it for a couple of years).
I remember it used to be a pain to bootstrap Java on FreeBSD due to the lack of pre-compiled binary packages. You first had to enable the Linux compatibility layer, then install Linux Java, and then compile FreeBSD Java with the bootstrapped Linux Java (yes, Java is mainly programmed in C++, but for some reason compiling Java for FreeBSD also required an installation of Java). Nowadays, there are ready-made OpenJDK binary packages you can install, so things have improved a lot since.
FreeBSD always had a place somewhere in my life:
Debian GNU/kFreeBSD is now dead (and so is my experiment with it)...
https://www.debian.org/ports/kfreebsd-gnu/
...but I still have an old uname output saved :-):
[root@saturn /usr/jail/serv14/etc] # jexec 21 bash
root@rhea:/ # uname -a
GNU/kFreeBSD rhea.buetow.org 8.0-RELEASE-p5 FreeBSD 8.0-RELEASE-p5 #2: Sat Nov 27 13:10:09 CET 2010 root@saturn.buetow.org:/usr/obj/usr/srcs/freebsd.src8/src/sys/SERV10 x86_64 amd64 Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz GNU/kFreeBSD
Currently, I use FreeBSD on my personal NAS server. The server is a regular PC with a bunch of hard drives and a ZFS RAIDZ (with 4x2TB drives) + a couple of external backup drives.
https://www.FreeBSD.org
While CentOS 8 is already out of support, I still use CentOS 7 (which will receive security updates until 2024). CentOS 7 runs in a cloud VM and is home to my personal NextCloud and Wallabag installations. You probably already know NextCloud. As for Wallabag: it is a great free and open source alternative to Pocket (for reading articles from the web offline later). Yes, you can pay for a Wallabag subscription, but you can also host it for free on your own server.
NextCloud
The reason I use Linux and not *BSD for these services at the moment is Docker. With Docker, it's easy-peasy to get them up and running. I will have to switch to another OS before CentOS 7 runs out of support, though. It might be CentOS Stream or Rocky Linux, but more likely I will use FreeBSD. FreeBSD doesn't have Docker, but you can create a self-contained jail for each of the web apps instead.
I was using FreeBSD jails for LAMP stacks before I started using CentOS. The reason I switched to CentOS (it was still CentOS 6 at the time) in the first place was that I wanted to try out something new.
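A self-contained jail per web app can be declared in /etc/jail.conf. This is only a minimal sketch; the jail name, hostname, path and address are hypothetical:

```
# /etc/jail.conf
nextcloud {
    host.hostname = "nextcloud.example.org";
    path = "/usr/jail/nextcloud";
    ip4.addr = "192.168.1.10";
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}
```

With jail_enable="YES" in /etc/rc.conf, `service jail start nextcloud` brings it up, and `jexec nextcloud sh` drops you into a shell inside it.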
https://www.centos.org
I use two small OpenBSD "cloud" boxes as my public-facing internet front-ends. The services I run here are:
OpenBSD is a complete operating system. I love it for its "simplicity" and "correctness" and its good documentation (I love the manual pages in particular). OpenBSD is also known for its innovations in security. I must admit, though, that most Unix-like operating systems would be secure enough for my personal needs and that I don't really need OpenBSD here. Nevertheless, I think it's the ideal operating system for what I am using it for.
The only software that was not part of the base system and that I had to install additionally was the Gemini server (vger) and Git, both of which were available as pre-compiled OpenBSD binary packages. So, besides these two packages, it is indeed a pretty complete operating system for my use case.
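Installing those two packages is a one-liner with the OpenBSD package tools (assuming doas is configured for the user):

```shell
doas pkg_add vger git
```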
https://www.openbsd.org
I have to use a MacBook Pro with macOS for work. What else can I say but that this would never have been my personal choice. At least macOS is a UNIX under the hood, comes with a decent terminal, and there are plenty of terminal apps available via Brew. Some of the inner workings of macOS were actually forked from the FreeBSD project.
developer.apple.com: BSD in macOS/Darwin
I find the macOS UI rather confusing.
At some point, I got fed up with big tech, like Google and Samsung (or Apple, but I personally don't use Apple), spying on me. So I purchased a Google phone (a midrange Pixel) and installed LineageOS, a free and open source distribution of Android, on it. I don't have anything from Google installed on it (not even the Play Store; I install my apps from F-Droid). It has been my daily driver since mid-2021.
So far, the experience is not great, but good. The main culprits are the lack of Google Maps, Google Gboard, and the camera app, which lacks some features on LineageOS (e.g. no wide-angle lens support). Also, I can't use my banking apps anymore. Sometimes apps crash for no apparent reason, but I get around it so far. I shouldn't spend so much time on my smartphone anyway! And the whole point of switching to LineageOS was to get away from big tech, so I should not complain :-). What I do like is that 95% of the things I used to do on a proprietary mobile phone can also be done with LineageOS.
Read also "The Middle Way" section of this blog post regarding smartphones.
There's also the excellent Termux app in the F-Droid store, which transforms the phone into a small Linux handheld device. I am able to run all of my Linux/Unix terminal apps with it.
https://lineageos.org/
Unfortunately, I still have to keep my proprietary Android phone around. Sometimes, I really need to use proprietary apps which are only available from the Google Play Store and also require the Google services installed on the phone. I don't carry this phone around all the time, and I only use it intentionally for very specific use cases. I think this is the best compromise I can make.
I have to use an iPhone for work. I like the hardware, but I hate the OS (you could also call it spyOS); it's a necessary evil, unfortunately. Apple is even worse than Google here (despite claiming to produce the most secure phones). I don't have it with me all the time, or I keep it switched off when I don't need it. I also find iOS quite unintuitive to use.
Being on-call for work means being reachable 24/7. This implies that the phone is carried around all the time (in a switched-on state). 1984 is now.
https://en.wikipedia.org/wiki/Nineteen_Eighty-Four
I use it on my PineTime smartwatch. Other than checking the time and my step count, I really don't do anything fancy with it (yet).
https://www.pine64.org/pinetime/
I usually install an army of Raspberry Pi 3's in my house before I travel for a prolonged amount of time. All Pis are equipped with a camera and have motionEyeOS (a Linux-based video surveillance system) installed. There's a neat Android app in the F-Droid store which lets me keep an eye on everything. I make the Pis accessible from the internet via reverse SSH tunnels through one of my frontend servers.
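Such a reverse tunnel boils down to a single ssh invocation per Pi. A sketch (the hostname, user and ports are hypothetical, and I assume the camera's web UI listens on local port 80):

```shell
# Run on the Pi: forward port 9001 on the frontend server
# back to the Pi's local web UI on port 80.
# -N: no remote command, tunnel only.
ssh -N -R 9001:localhost:80 tunnel@frontend.example.org
```

Note that by default the remote end of -R binds only to the frontend's loopback interface; to reach it from the internet directly, GatewayPorts must be enabled in the frontend's sshd_config (or a local reverse proxy forwards to it).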
https://github.com/ccrisan/motioneyeos
I use a Kobo Forma as my e-reader device. I have started to switch off the Wifi and to only sideload DRM-free ePubs onto it. Even offline, it's a fully capable reader device. I wouldn't like the Kobo to call home to Rakuten. I would love to replace it one day with an open source e-reader alternative like the PineNote. There are also some interesting attempts at installing postmarketOS Linux on Kobo devices. The latter already boots, but is far from usable as a normal e-reader.
The PineNote
But as a fall-back, one could still use the good old dead tree format!
An Android TV box is used for watching movies and series on Netflix and Amazon Prime Video (yes, I am human too and rely once in a while on big tech streaming services). The Android TV box is currently in the process of being replaced by OSMC, though. Most services seem to work fine with OSMC, but I haven't got around to tinkering with Netflix and Amazon there yet.
https://osmc.tv/
This section is just for the sake of having a complete list of all the OSes I have used for a significant amount of time. I might not use all of them anymore...
I used NetBSD on an old Sun SPARCstation 10 as a student. I also ran NetBSD on a very old ThinkPad with 96MB!!! of RAM (even with X/evilWM). I also installed (but never really used) NetBSD on an HP Jornada 680. But that's all more than 10 years ago, and I haven't looked at NetBSD for a long time. I want to revive it on an "old" ThinkPad T450 of mine which I currently don't use.
https://netbsd.org
E-Mail your comments to hi@paul.cyou :-)
Back to the main site
Published at 2022-01-23T16:42:04+00:00
__
/ _| ___ ___ _______ _ __ ___
| |_ / _ \ / _ \ |_ / _ \| '_ \ / _ \
| _| (_) | (_) | / / (_) | | | | __/
|_| \___/ \___(_)___\___/|_| |_|\___|
I don't count this as a real blog post, but more of an announcement (I aim to write one real post monthly). From now on, "foo.zone" is the new address of this site. All other addresses will still forward to it and will eventually (based on the traffic still going through them) be deactivated.
As you can read on Wikipedia, "foo" is, alongside "bar" and "baz", a metasyntactic variable (you know what I mean if you are a programmer or IT person):
https://en.wikipedia.org/wiki/Metasyntactic_variable
It's my personal internet site and blog. Everything you read on this site is my personal opinion and experience. It's not intended to be anything professional. If you want my professional background, go to my LinkedIn profile.
Since I rebooted this blog last year, I have struggled to find a good host name for it. I started off with "buetow.org", and later switched halfway to "snonux.de". Buetow is my last name, and snonux relates to some of my internet nicknames and personal IT projects. I also have a "SnonuxBSD" ASCII-art banner in the motd of my FreeBSD-based home NAS.
For a while, I was thinking about a better host name for this site, meeting the following criteria:
So I think that foo.zone is the perfect match. It's a bit geeky, but so is this site. The meta-syntactic variable relates to computer science and programming, so does this site. Other than that, staying in this sphere, it's a pretty generic name.
I was pretty happy to find out that foo.zone was still available for registration. I stumbled across it just yesterday while playing around with my new authoritative DNS servers. I was actually quite surprised, as usually such short SLDs (second-level domains), especially "foo", are all taken already.
As a funny bit, I almost chose "foo.surf" over "foo.zone", as in "surfing this site", but then decided against it, as I would have had to tell everyone that I am not that much into water sports. Well, on the other hand, I now may have to explain to non-programmers that I am not a fan of the rock band "Foo Fighters". But that is acceptable, as I don't expect many "normal" people to visit the foo zone anyway. If you made it this far, I have to congratulate you: you are not a normal person.
The host buetow.org will stay, however not as the primary address for this site. I will keep using it for my personal internet infrastructure as well as for most of my E-Mail addresses. I have used buetow.org for that over the past 10 years anyway, and that won't change any time soon. I don't know what I am going to do with snonux.de in the long run. A .de SLD (for Germany) is pretty cheap, so I might just keep it for now.
E-Mail your comments to hi@paul.cyou :-)
Back to the main site
'\ '\ . . |>18>>
\ \ . ' . |
O>> O>> . 'o |
\ .\. .. . |
/\ . /\ . . |
/ / . / / .' . |
jgs^^^^^^^`^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Art by Joan Stark, mod. by Paul Buetow
❯ ls -l /proc/self/fd/
total 0
lrwx------. 1 paul paul 64 Nov 23 09:46 0 -> /dev/pts/9
lrwx------. 1 paul paul 64 Nov 23 09:46 1 -> /dev/pts/9
lrwx------. 1 paul paul 64 Nov 23 09:46 2 -> /dev/pts/9
lr-x------. 1 paul paul 64 Nov 23 09:46 3 -> /proc/162912/fd
❯ echo Foo
Foo
❯ echo Foo > /proc/self/fd/0
Foo
❯ echo Foo 1>&2 2>/dev/null
Foo
❯ echo Foo 2>/dev/null 1>&2
❯
❯ { echo Foo 1>&2; } 2>/dev/null
❯ ( echo Foo 1>&2; ) 2>/dev/null
❯ { { { echo Foo 1>&2; } 2>&1; } 1>&2; } 2>/dev/null
❯ ( ( ( echo Foo 1>&2; ) 2>&1; ) 1>&2; ) 2>/dev/null
❯
❯ lsof -a -p $$ -d0,1,2
COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
bash    62676 paul    0u   CHR  136,9      0t0   12 /dev/pts/9
bash    62676 paul    1u   CHR  136,9      0t0   12 /dev/pts/9
bash    62676 paul    2u   CHR  136,9      0t0   12 /dev/pts/9
❯ touch foo
❯ exec 3>foo # This opens fd 3 and binds it to file foo.
❯ ls -l /proc/self/fd/3
l-wx------. 1 paul paul 64 Nov 23 10:10 \
/proc/self/fd/3 -> /home/paul/foo
❯ cat foo
❯ echo Bratwurst >&3
❯ cat foo
Bratwurst
❯ exec 3>&- # This closes fd 3.
❯ echo Steak >&3
-bash: 3: Bad file descriptor
❯ cat grandmaster.sh
#!/usr/bin/env bash

# Write a file data-file containing two lines
echo Learn You a Haskell > data-file
echo for Great Good >> data-file

# Link fd with fd 6 (saves default stdin)
exec 6<&0

# Overwrite stdin with data-file
exec < data-file

# Read the first two lines from it
declare LINE1 LINE2
read LINE1
read LINE2

# Print them
echo First line: $LINE1
echo Second line: $LINE2

# Restore default stdin and delete fd 6
exec 0<&6 6<&-
❯ chmod 750 ./grandmaster.sh
❯ ./grandmaster.sh
First line: Learn You a Haskell
Second line: for Great Good
❯ cat <<END
> Hello World
> It's $(date)
> END
Hello World
It's Fri 26 Nov 08:46:52 GMT 2021
❯ <<END cat
> Hello Universe
> It's $(date)
> END
Hello Universe
It's Fri 26 Nov 08:47:32 GMT 2021
❯ declare VAR=foo
❯ if echo "$VAR" | grep -q foo; then
>     echo '$VAR contains foo'
> fi
$VAR contains foo
❯ if grep -q foo <<< "$VAR"; then
>     echo '$VAR contains foo'
> fi
$VAR contains foo
❯ grep -q foo <<< "$VAR" && echo '$VAR contains foo'
$VAR contains foo
❯ if [[ "$VAR" =~ foo ]]; then echo yay; fi
yay
❯ read a <<< ja
❯ echo $a
ja
❯ read b <<< 'NEIN!!!'
❯ echo $b
NEIN!!!
❯ dumdidumstring='Learn you a Golang for Great Good'
❯ read -a words <<< "$dumdidumstring"
❯ echo ${words[0]}
Learn
❯ echo ${words[3]}
Golang
❯ echo 'I like Perl too' > perllove.txt
❯ cat - perllove.txt <<< "$dumdidumstring"
Learn you a Golang for Great Good
I like Perl too
❯ echo $RANDOM
11811
❯ echo $RANDOM
14997
❯ echo $RANDOM
9104
❯ cat ./calc_answer_to_ultimate_question_in_life.sh
#!/usr/bin/env bash
declare -i MAX_DELAY=60
random_delay () {
local -i sleep_for=$((RANDOM % MAX_DELAY))
echo "Delaying script execution for $sleep_for seconds..."
sleep $sleep_for
echo 'Continuing script execution...'
}
main () {
random_delay
# From here, do the real work. Calculating the answer to
# the ultimate question can take billions of years....
: ....
}
main
❯
❯ ./calc_answer_to_ultimate_question_in_life.sh
Delaying script execution for 42 seconds...
Continuing script execution...
❯ set -x
❯ square () { local -i num=$1; echo $((num*num)); }
❯ num=11; echo "Square of $num is $(square $num)"
+ num=11
++ square 11
++ local -i num=11
++ echo 121
+ echo 'Square of 11 is 121'
Square of 11 is 121
❯ bash -x ./half_broken_script_to_be_debugged.sh
❯ bash -x ./grandmaster.sh
+ bash -x ./grandmaster.sh
+ echo Learn You a Haskell
+ echo for Great Good
+ exec
+ exec
+ declare LINE1 LINE2
+ read LINE1
+ read LINE2
+ echo First line: Learn You a Haskell
First line: Learn You a Haskell
+ echo Second line: for Great Good
Second line: for Great Good
+ exec
❯
❯ help set | grep -- -e
-e Exit immediately if a command exits with a non-zero status.
❯ bash -c 'set -e; echo hello; grep -q bar <<< foo; echo bar'
hello
❯ echo $?
1
❯ bash -c 'set -e; echo hello; grep -q bar <<< barman; echo bar'
hello
bar
❯ echo $?
0
❯ bash -c 'set -e
> grep -q bar <<< foo
> if [ $? -eq 0 ]; then
>     echo "matching"
> else
>     echo "not matching"
> fi'
❯ echo $?
1
❯ bash -c 'set -e
> if grep -q bar <<< foo; then
>     echo "matching"
> else
>     echo "not matching"
> fi'
not matching
❯ echo $?
0
❯ bash -c 'set -e
> if grep -q bar <<< barman; then
>     echo "matching"
> else
>     echo "not matching"
> fi'
matching
❯ echo $?
0
❯ cat ./e.sh
#!/usr/bin/env bash
set -e
foo () {
local arg="$1"; shift
if [ -z "$arg" ]; then
arg='You!'
fi
echo "Hello $arg"
}
bar () {
# Temporarily disable -e
set +e
local arg="$1"; shift
# Enable -e again.
set -e
if [ -z "$arg" ]; then
arg='You!'
fi
echo "Hello $arg"
}
# Will succeed
bar World
foo Universe
bar
# Will terminate the script
foo
❯ ./e.sh
Hello World
Hello Universe
Hello You!
❯ help set | grep pipefail -A 2
pipefail the return value of a pipeline is the status of
the last command to exit with a non-zero status,
or zero if no command exited with a non-zero status
❯ grep paul /etc/passwd | tr '[a-z]' '[A-Z]'
PAUL:X:1000:1000:PAUL BUETOW:/HOME/PAUL:/BIN/BASH
❯ echo $?
0
❯ grep TheRock /etc/passwd
❯ echo $?
1
❯ grep TheRock /etc/passwd | tr '[a-z]' '[A-Z]'
❯ echo $?
0
❯ set -o pipefail
❯ grep TheRock /etc/passwd | tr '[a-z]' '[A-Z]'
❯ echo $?
1
Published at 2021-12-26T12:02:02+00:00; Updated at 2022-01-12
)
) (( (
( )) )
) ) // (
_ ( __ ( ~->>
,-----' |__,_~~___<'__`)-~__--__-~->> <
| // : | -__ ~__ o)____)),__ - '> >- >
| // : |- \_ \ -\_\ -\ \ \ ~\_ \ ->> - , >>
| // : |_~_\ -\__\ \~'\ \ \, \__ . -<- >>
`-----._| ` -__`-- - ~~ -- ` --~> >
_/___\_ //)_`// | ||]
_____[_______]_[~~-_ (.L_/ ||
[____________________]' `\_,/'/
||| / ||| ,___,'./
||| \ |||,'______|
||| / /|| I==||
||| \ __/_|| __||__
-----||-/------`-._/||-o--o---o---
~~~~~'
Log4shell (CVE-2021-44228) made it clear, once again, that working in information technology is not an easy job (especially when you are a DevOps person). I thought it would be interesting to summarise a few techniques that help you relax.
(PS: When I say DevOps, I also mean Site Reliability Engineers and Sysadmins. I believe SRE, DevOps Engineer and Sysadmin are just synonymous titles for the same job.)
https://en.wikipedia.org/wiki/Log4Shell
It's important to set clear expectations. It can be difficult to guess what others expect or don't expect from you. If you know exactly what you are supposed to do, you can work towards a specific goal and not worry about all the other noise so much.
However, if you are in a more senior position, you are expected to plan your tasks yourself to a large degree and also to be flexible, so you can react quickly to new situations (e.g. resolving incidents). Also, to a large degree, you have to prioritise your work yourself. This can overthrow all of your plans. In extreme cases, it can help to share your plans with your team so that everyone is on the same page. Afterwards, be the execution machine. People are happy when they see that stuff gets done. Communicate all critical work you do clearly. This will make visible all the technical debt there might be. It does not help in the long run if things are fixed in the background without any visibility.
Out of politeness, many people don't set clear expectations. I personally may sometimes sound "too German" when setting expectations, but so far nobody has complained, and I have even received positive feedback about it.
There are many temptations to get side-tracked by other projects and/or issues. It is important to set boundaries here. But always respond to all requests, as nothing is more frustrating than asking a person and never getting an answer back. This is especially the case when everyone is working from home and using tools such as Slack and E-Mail for most of their communication.
If the request is urgent and you have the capacity to help, you probably should help. If it's not urgent, maybe ask to postpone the request (e.g. ask for a ticket to be created, so that someone from your team can work on it later).
If the request is urgent, but you don't have the knowledge or the capacity to help, try to defer to a colleague who might be able to help. You could also provide some quick tips and hints, so that the requesters can resolve the issue by themselves. Make it transparent why you might not have the time right now, as this can help them to review their own priorities or to escalate.
Never make or take an escalation personally. The only reasons for an escalation should be technical issues or a lack of resources. An escalation then becomes like a math equation and does not need to involve emotions. So de facto, an escalation is nothing negative, but just a process people can follow to support decision-making. In a good company, escalations tend to be an exception, though. Staff know how to deal with things by themselves without bothering management too much.
If times are very stressful, think that it could always be worse:
When working in a team, you may feel that you could get things done faster if you just did everything by yourself. This can be a bit frustrating at times, as you might need to work late hours and also might need to explain things over and over again to others. Or you could be the one who needs things explained over and over again, as you are not so familiar with the topic (yet). Then you will appreciate it when the other person slows down a bit for you.
Security is a team sport. So slow down and make sure that everyone is on track with the goals. You can go full speed with your very own subtasks, though. Not everyone knows how to use all the tools as well as a full-time DevOps person. As a DevOps person, you are not a security expert, though. The security experts are different people in your company, but DevOps will be the main tribe deploying mitigations (following the security recommendations), and management will be the main tribe coordinating all the efforts.
So even if you think that you can do everything faster on your own, can you really? You probably don't know what you don't know about IT security. The more you know about it, the more you know about what you don't know.
Slowing down also helps to prevent errors. Don't rush your tasks, even if they are urgent. Try to be quick, but don't rush. Maybe you are writing a script to mitigate a production issue. You could have others peer review that script, for example. Their primary programming language may not be the same as yours (e.g. Golang vs Perl), but they would understand the logic. Or ask another DevOps person from your company with good scripting skills to review your mitigation; they may, however, lack the domain knowledge of the software you are patching. In either case, the review will take a bit longer, as the reviewer might not be an expert in everything.
So relax, don't always expect immediate results. Set clear and reasonable timelines for the management about the mitigations. You are not a superhero who has to do everything by yourself. Sometimes, you will miss a deadline. But that will have good reasons. Don't rush to complete just to meet a deadline.
Read also "Defensive DevOps" about deploying mitigation scripts.
Always keep that in mind. You can't solve all problems on your own. Maybe you could, but that would mean a lot of additional stress (and this will reflect on your personal life). Also, Superman and Wonder Woman receive much higher salaries than you ever will ;-).
I have been a superhero multiple times mitigating critical incidents, and I was proud of it in those moments. But actually, I am not proud looking at them in retrospect, as there should always be other people around who are able to resolve an incident. No company should rely on a single person; there must always be a substitute. You are not a superhero, and as harsh as it sounds, everyone is replaceable. Every superhero can be replaced with another superhero. The only thing it takes to become a superhero is time to get to know the infrastructure and tools very well, paired with dedication to the work.
This doesn't mean that you shouldn't try your best. But you don't need to try to be the superhero. Maybe someone else will be the superhero, and that's OK as long as it's not always the same person every time. Everyone can have a good day after all. If I could choose between being a superhero and having a good night's sleep, I would probably prefer the sleep.
If you are a superhero, try to give away some of your superpowers, so that you can relax in the evening knowing that others (e.g. the current on-call engineers) know how to tackle things. Every member of the team needs to do DevOps (even the team managers, in my humble opinion). Some may be less experienced than others or have different areas of expertise, but to counteract this you could document the recurring tasks so that they are easy to follow (and can later either be automated away or, even better, fully fixed).
On the other hand, if you are a DevOps person, try to slip into other people's shoes too. For example, you might not be an expert in Java programming, but a lot of the infrastructure is programmed in Java. This is where the Software Developers and Engineers usually shine. But if you know how to read, debug and even extend Java code too (by learning from the Software Developer superheroes), then you will only benefit from it.
So you are not a superhero. Or, if you are a superhero, then all colleagues should be superheroes too.
In a perfect world, every member of a team comes along with the same strengths and skills. But in reality, everyone is different.
In order to distribute the troubleshooting skills across the team, you should not jump on every problem immediately. Leave some space for others to resolve the issue. This is where the best learning happens. Nobody will learn from you when you solve all the problems. People might learn something after you explain what you did, but the takeaways will be minimal compared to when people try to resolve issues by themselves. Always be available for questions, which will help your colleagues to steer in the right direction, and if you think it helps, give them some tips for resolving the issue, even if they didn't ask for it. Sometimes, engineers are too proud to ask.
All of this changes when there is an issue you don't know how to resolve yourself. Jump on it, so you can learn from it. But also ask for advice if you are unsure about it.
If the issue is a very critical one, then you might be better off trying to resolve it as fast as possible with your full powers in order to avoid any major damage to the company. This, of course, only works if you know how to resolve it quickly. So don't leave it to others who don't have much experience with it yet. If possible, work with the team to resolve the issue. Unfortunately, solving it with the team is not always the fastest way. So in this particular circumstance, the company may be better off being saved by a single superhero. Make sure that the problem will not occur again or, at least, that others can fix it the next time without Superman flying by.
Be strict about your time off. Nowadays, tech workers check their messages outside of office hours too and are reachable 24/7. This really should only be the case when you are on-call, to be honest (or if you work for a startup). All other out-of-office time is owned by you and not your employer. You have signed a 40-hours-per-week contract, not a 7-days-per-week one. Of course, there will always be some sort of flexibility and exceptions. You might need to work over the weekend to get a migration done or a problem solved. But to balance it out, you should get other days off as substitutes.
It's important to shut your brain off from work during your breaks (be strict with your breaks, leave your desk for lunch or for a walk in the early afternoon, and if you aren't on-call, don't take your work phone with you either). You will be happier and also much more energized and productive in the afternoon. Also, when you are reachable 24/7, your colleagues will start thinking that you don't have anything more important to do than work.
It does not matter how many tasks are in your backlog or how many issues are to be tackled. *Always* find time for personal advancement. Most issues aren't critical anyway and can wait a bit. At the end of the day, you will have the nice feeling of having accomplished something meaningful. This can be an interesting project or learning a new technology you are interested in. Of course, there must be consensus with your manager (unless you do that kind of thing in your personal time, of course).
If you are too busy at work and just can't block time, then maybe it's time to think about alternatives. But before you do that, there is probably something else you can do. Perhaps you just think you can't block time, but you would be positively surprised to hear from your manager that they will fully support you. Of course, they won't agree to you working full-time on your pet projects. But a certain portion of your time should be allocated for personal advancement. After all, your employer also wants you to stay happy so that you don't look for alternatives. It's in everyone's interest that you like your job and stay motivated. The more motivated you are, the more productive you are. The more productive you are, the more valuable you are for the company.
Another blog post worth reading:
https://unixsheikh.com/articles/how-to-stay-sane-in-todays-world-of-tech.html
E-Mail your comments to hi@paul.cyou :-)
Back to the main site
'\ . . |>18>>
\ . ' . |
O>> . 'o |
\ . |
/\ . |
/ / .' |
jgs^^^^^^^`^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Art by Joan Stark
❯ cat < /dev/tcp/time.nist.gov/13

59536 21-11-18 08:09:16 00 0 0 153.6 UTC(NIST) *
❯ exec 5<>/dev/tcp/google.de/80
❯ echo -e "GET / HTTP/1.1\nhost: google.de\n\n" >&5
❯ cat <&5 | head
HTTP/1.1 301 Moved Permanently
Location: http://www.google.de/
Content-Type: text/html; charset=UTF-8
Date: Thu, 18 Nov 2021 08:27:18 GMT
Expires: Sat, 18 Dec 2021 08:27:18 GMT
Cache-Control: public, max-age=2592000
Server: gws
Content-Length: 218
X-XSS-Protection: 0
X-Frame-Options: SAMEORIGIN
❯ uptime # Without process substitution
 10:58:03 up 4 days, 22:08,  1 user,  load average: 0.16, 0.34, 0.41
❯ cat <(uptime) # With process substitution
 10:58:16 up 4 days, 22:08,  1 user,  load average: 0.14, 0.33, 0.41
❯ stat <(uptime)
  File: /dev/fd/63 -> pipe:[468130]
  Size: 64        Blocks: 0          IO Block: 1024   symbolic link
Device: 16h/22d   Inode: 468137      Links: 1
Access: (0500/lr-x------)  Uid: ( 1001/    paul)   Gid: ( 1001/    paul)
Context: unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
Access: 2021-11-20 10:59:31.482411961 +0000
Modify: 2021-11-20 10:59:31.482411961 +0000
Change: 2021-11-20 10:59:31.482411961 +0000
 Birth: -
❯ echo a > /tmp/file-a.txt
❯ echo b >> /tmp/file-a.txt
❯ echo c >> /tmp/file-a.txt
❯ echo b > /tmp/file-b.txt
❯ echo a >> /tmp/file-b.txt
❯ echo c >> /tmp/file-b.txt
❯ echo X >> /tmp/file-b.txt
❯ diff -u <(sort /tmp/file-a.txt) <(sort /tmp/file-b.txt)
--- /dev/fd/63 2021-11-20 11:05:03.667713554 +0000
+++ /dev/fd/62 2021-11-20 11:05:03.667713554 +0000
@@ -1,3 +1,4 @@
 a
 b
 c
+X
❯ echo X >> /tmp/file-a.txt # Now, both files have the same content again.
❯ diff -u <(sort /tmp/file-a.txt) <(sort /tmp/file-b.txt)
❯
❯ diff -u <(ls ./dir1/ | sort) <(ls ./dir2/ | sort)
❯ wc -l <(ls /tmp/) /etc/passwd <(env)
24 /dev/fd/63
49 /etc/passwd
24 /dev/fd/62
97 total
❯
❯ while read foo; do
> echo $foo
> done < <(echo foo bar baz)
foo bar baz
❯
❯ tar cjf file.tar.bz2 foo
❯ tar cf >(bzip2 -c > file.tar.bz2) foo
❯ { ls /tmp; cat /etc/passwd; env; } | wc -l
97
❯ ( ls /tmp; cat /etc/passwd; env; ) | wc -l
97
❯ echo $$
62676
❯ { echo $$; }
62676
❯ ( echo $$; )
62676
❯ ( env; ls ) | wc -l
27
❯ { env; ls } | wc -l
>
> ^C
(list) list is executed in a subshell environment (see COMMAND EXECUTION ENVIRONMENT
below). Variable assignments and builtin commands that affect the shell's
environment do not remain in effect after the command completes. The return
status is the exit status of list.
{ list; }
list is simply executed in the current shell environment. list must be ter‐
minated with a newline or semicolon. This is known as a group command. The
return status is the exit status of list. Note that unlike the metacharac‐
ters ( and ), { and } are reserved words and must occur where a reserved word
is permitted to be recognized. Since they do not cause a word break, they
must be separated from list by whitespace or another shell metacharacter.
$ Expands to the process ID of the shell. In a () subshell, it expands to the
process ID of the current shell, not the subshell.
❯ echo $BASHPID; { echo $BASHPID; }; ( echo $BASHPID; )
1028465
1028465
1028739
❯ echo {0..5}
0 1 2 3 4 5
❯ for i in {0..5}; do echo $i; done
0
1
2
3
4
5
❯ echo {00..05}
00 01 02 03 04 05
❯ echo {000..005}
000 001 002 003 004 005
❯ echo {201..205}
201 202 203 204 205
❯ echo {a..e}
a b c d e
❯ echo \"{These,words,are,quoted}\"
"These" "words" "are" "quoted"
❯ echo {one,two}\:{A,B,C}
one:A one:B one:C two:A two:B two:C
❯ echo \"{one,two}\:{A,B,C}\"
"one:A" "one:B" "one:C" "two:A" "two:B" "two:C"
❯ echo Linux-{one,two,three}\:{A,B,C}-FreeBSD
Linux-one:A-FreeBSD Linux-one:B-FreeBSD Linux-one:C-FreeBSD Linux-two:A-FreeBSD Linux-two:B-FreeBSD Linux-two:C-FreeBSD Linux-three:A-FreeBSD Linux-three:B-FreeBSD Linux-three:C-FreeBSD
❯ echo Hello world
Hello world
❯ echo Hello world | cat -
Hello world
❯ cat - <<ONECHEESEBURGERPLEASE
Hello world
ONECHEESEBURGERPLEASE
Hello world
❯ cat - <<< 'Hello world'
Hello world
❯ tar -czf - /some/dir | ssh hercules@buetow.org tar -xzvf -
$ head -n 1 grandmaster.sh
#!/usr/bin/env bash
$ file - < <(head -n 1 grandmaster.sh)
/dev/stdin: a /usr/bin/env bash script, ASCII text executable
$ cat -
hello
hello
^C
$ file -
#!/usr/bin/perl
/dev/stdin: Perl script text executable
❯ cat foo.sh
#!/usr/bin/env bash
declare -r USER=${USER:?Missing the username}
declare -r PASS=${PASS:?Missing the secret password for $USER}
echo $USER:$PASS
❯ chmod +x foo.sh
❯ ./foo.sh
./foo.sh: line 3: USER: Missing the username
❯ USER=paul ./foo.sh
./foo.sh: line 4: PASS: Missing the secret password for paul
❯ echo $?
1
❯ USER=paul PASS=secret ./foo.sh
paul:secret
❯ VARIABLE1=value1 VARIABLE2=value2 ./script.sh
❯ export VARIABLE1=value1
❯ export VARIABLE2=value2
❯ ./script.sh
❯ help :
:: :
Null command.
No effect; the command does nothing.
Exit Status:
Always succeeds.
❯ :
❯ echo $?
0
❯ while : ; do date; sleep 1; done
Sun 21 Nov 12:08:31 GMT 2021
Sun 21 Nov 12:08:32 GMT 2021
Sun 21 Nov 12:08:33 GMT 2021
^C
❯
❯ foo () { }
-bash: syntax error near unexpected token `}'
❯ foo () { :; }
❯ foo
❯
❯ if foo; then :; else echo bar; fi
❯ : I am a comment and have no other effect
❯ : I am a comment and result in a syntax error ()
-bash: syntax error near unexpected token `('
❯ : "I am a comment and don't result in a syntax error ()"
❯
❯ declare i=0
❯ $[ i = i + 1 ]
bash: 1: command not found...
❯ : $[ i = i + 1 ]
❯ : $[ i = i + 1 ]
❯ : $[ i = i + 1 ]
❯ echo $i
4
❯ declare j=0
❯ let j=$((j + 1))
❯ let j=$((j + 1))
❯ let j=$((j + 1))
❯ let j=$((j + 1))
❯ echo $j
4
❯ bash -c 'echo $(( 1/10 ))'
0
❯ zsh -c 'echo $(( 1/10 ))'
0
❯ bash -c 'echo $(( 1/10.0 ))'
bash: line 1: 1/10.0 : syntax error: invalid arithmetic operator (error token is ".0 ")
❯ zsh -c 'echo $(( 1/10.0 ))'
0.10000000000000001
❯
❯ bc <<< 'scale=2; 1/10' .10
Published at 2021-10-22T10:02:46+03:00
c=====e
H
____________ _,,_H__
(__((__((___() //| |
(__((__((___()()_____________________________________// |ACME |
(__((__((___()()()------------------------------------' |_____|
ASCII Art by Clyde Watson
I have seen many different setups and infrastructures during my career. My roles have always included front-line, ad-hoc fire-fighting of production issues. This often involves identifying and fixing those issues under time pressure, without the comfort of 2-week-long SCRUM sprints and without an exhaustive QA process. I have also written a lot of code (Bash, Ruby, Perl, Go, and a little Java) following the typical software development process, but that did not always apply to critical production issues.
Unfortunately, no system is 100% reliable, and there will always be a subset of the possible problem space you cannot be prepared for. IT infrastructures can be complex. Not even mentioning Kubernetes yet, a Microservice-based infrastructure can complicate things even further. You can take care of 99% of all potential problems by following all the DevOps best practices. Those best practices are not the subject of this blog post; this post is about the sub-1% of issues arising from nowhere that you can't be prepared for.
Is there a software bug in production, even though the software passed QA (after all, it is challenging to reproduce production behaviour in an artificial testing environment) and didn't show any issues until a special case came up just now, a week after it got deployed? Are there multiple hardware failures happening which cause a loss of service redundancy or data inaccessibility? Is the automation of external customers connected to your infrastructure putting unexpected extra pressure on your grid, driving up latencies and putting the SLAs at risk? You bet the solution is: Sysadmins, SREs and DevOps Engineers to the rescue.
You agree that fixing production issues this way is not proactive but rather reactive. I prefer to call it defensive, though, as you "defend" your system against a production issue. But, at the same time, you have to take a cautious (defensive) approach to fix it, as you don't want to make things worse.
Over time, I have compiled a list of fire-fighting automation strategies, which I would like to share here.
Defensive DevOps is a term I coined myself. I define it this way:
That sounds a bit crazy, but this is, unfortunately, on rare occasions the reality. The question is not whether production issues will happen; the question is WHEN they will happen. Every large provider, such as Google, Netflix, and so on, has suffered significant outages before, and I firmly believe that their engineers know what they are doing. But you can prepare for the unexpected only to a certain degree.
Do you have to solve problem X? The best solution would be to fully automate it away, correct? No, the best way is to fix problem X manually first. Does the problem appear on one server or on a thousand servers? The scale does not matter here. The point is that you should fix the problem at least once manually, so you understand the problem and how to solve it before implementing any automation around it.
You should also have a short meeting with your team. Every person may have a different perspective and can give valuable input for determining the best strategy. But, again, keep the session short and efficient. Focus on the facts. After all, you are the domain expert and you probably know what you are doing.
Once you understand the problem, fix it on a different server again. This time maybe write a small program or script. Semi-automate the process, but don't fully automate it yet. Start the semi-automated solution manually on a couple of more servers and observe the result. You want to gain more confidence that this really solved the problem. This can take a couple of hours manually running it over and over again. During that process, you will improve your script iteratively.
You have to develop code directly on a production system. This sounds a bit controversial, but you want to get a working solution ASAP, and there is a very high chance that you can't reproduce problem X in a development or QA environment. Or at least it will consume significant effort and time to reproduce the problem, and by the time your code is ready, it's already too late. So the most practical solution is to directly develop your solution against a production system with the problem at hand.
You might not have your full-featured IDE available on a production system, but a text editor, such as Vim (or Neovim), is sufficient for writing scripts. Some editors allow you to edit files remotely. With Vim you can accomplish this with "vim scp://SERVER///path/to/file.sh". Every time you save the file, it will be automatically uploaded via SCP to the server. From there, you can execute it directly. This comes with the additional benefit of still having access to all the Vim plugins installed locally, which you might not have installed on any production machines. This approach also removes any network delays you might experience when running your editor directly on a remote machine.
Unfortunately, it will be a bit more complicated when you rely on code reviews (e.g. in a FIPS environment). Pair-programming could be the solution here.
You want to triple-check that your script is not damaging your system even further. You might introduce a bug to the code, so there should always be a way to roll back any permanent change it causes. You have to program it in a defensive style:
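As a sketch of what such a defensive style could look like in Bash (the file paths, the DRY_RUN convention and the run helper are illustrative assumptions, not an actual mitigation script):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a defensively written mitigation script.
set -euo pipefail  # Abort on errors, unset variables and pipe failures.

# Default to a harmless dry run; the operator must opt in to real changes.
declare -r DRY_RUN=${DRY_RUN:-yes}

run () {
    # Print every action; only execute it when DRY_RUN is explicitly "no".
    echo "RUN: $*"
    if [[ $DRY_RUN == no ]]; then
        "$@"
    fi
}

# Always keep a backup so any permanent change can be rolled back.
run cp -a /etc/service.conf /etc/service.conf.backup
run systemctl restart service
```

Running the script as-is only prints what it would do; `DRY_RUN=no ./mitigate.sh` performs the changes for real, and every action is logged either way.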
Furthermore, when you write a Bash script, always run ShellCheck (https://www.shellcheck.net/) on it. This helps to catch many potential issues before applying the script in production.
You probably won't have time to write unit tests. But what you can do is pedantically test your code manually. And you have to do the testing on a production machine. So how can you test your code in production without causing more damage?
Your script should be idempotent. This means you can run it an infinite number of times in a row, and you will always get the same result. For example, in the first run of the script, a file A gets renamed to A.backup. The second time you run the script, it attempts to do the same, but it recognises that A has already been renamed to A.backup and skips that step. This is very helpful for manual testing, as it means that you can re-run the script every time you extend it. You should dry-run the script at least once before running it for real. You can apply the same principle to almost all features you add to the code.
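The rename-and-skip step described above could be sketched like this (the file path and function name are hypothetical):

```shell
#!/usr/bin/env bash
# Idempotent backup step: safe to run any number of times.
backup_file () {
    local file=$1
    if [[ -e $file.backup ]]; then
        # Already done in a previous run; skip instead of clobbering it.
        echo "$file.backup already exists, skipping"
        return 0
    fi
    mv "$file" "$file.backup"
}

rm -f /tmp/A /tmp/A.backup  # Clean slate for the demonstration.
touch /tmp/A
backup_file /tmp/A   # First run renames /tmp/A to /tmp/A.backup.
backup_file /tmp/A   # Second run detects the backup and skips the step.
```

No matter how often the function is called, the end state is the same: one backup file, never overwritten.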
You may also want to inject manual negative testing into your script. For example, you want to run a particular function F in your script, but only if a certain pre-condition is met, and you want to ensure that the code branching works as expected. The pre-condition check could be pretty complex (e.g. N log messages containing a specific warning string are found in the application logs, but only on the cluster leader server). You can flip the switch directly in the code manually (e.g. run F only when the pre-condition isn't met) and then perform a dry run of the script and study the output. Once done, flip the switch back to its correct configuration. To be doubly sure, test the same on a different server type (e.g. on a follower and not on a leader system).
By following these principles, you test every line of code while you are developing it.
At one point, you will be tired of manually running your script and also confident enough to automate it. You could deploy it with a configuration management system such as Puppet and schedule periodic execution via cron, a systemd timer or even a separate background daemon process. You have to be extremely careful here. The more you automate, the more damage you can cause. You don't want to automate it on all servers involved at once; you want to slowly ramp up the automation.
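For the cron variant, a hypothetical crontab entry (the path, interval and log file are illustrative assumptions) might look like this:

```
# Run the mitigation script every 10 minutes and append everything it
# does to a log file, so the automation leaves an audit trail.
*/10 * * * * /usr/local/bin/mitigate.sh >> /var/log/mitigate.log 2>&1
```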
First, automate it only on one single canary server and monitor the result closely. Initially, only automate running the script in dry mode. Also, don't forget that the script should still log everything it is doing. Once everything looks fine, you can automate the script on the canary server for real. It shouldn't be a disaster if something goes wrong, as systems are usually designed in an HA fashion, where the same data is still available on at least one other server. In the worst-case scenario, you could recover data from there or from the local backup files your script created.
Now, you can add a handful more canary servers to the automation. You should pay close attention to what the automation is doing. You could use a tool like DTail for distributed log file following. At this point, you could also think of deploying a monitoring check (e.g. Icinga) to verify that your script is not terminating abnormally or logging warnings or errors.
DTail - The distributed log tail program
From there, you could automate the solution on more and more servers. Best, ramp up the automation to a handful of systems first, later to a whole line of servers (e.g. all secondary servers of a given cluster), and afterwards to all servers.
Remember, whenever something goes wrong, you will have plenty of logs and backup files available. The disaster recovery would involve extending your script to take care of that too or writing a new script for rolling back the backups.
If possible, don't deploy any automation shortly before the end of office hours, such as in the evening, or before holidays or weekends. The only exception would be that you, or someone else, will be available to monitor the automation outside of office hours. If it is a critical issue, someone, for example the on-call person, could take over. Or ask your boss whether you can work now and take another day off to compensate.
You should add an easy off-switch to your automation so that everyone from your team knows how to pause it if something goes wrong in order to adjust the automation accordingly. Of course, you should still follow all the principles mentioned in this blog post when making any changes.
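Such an off-switch can be as simple as a well-known pause file that anyone on the team can create or remove. A minimal sketch (the path and names are illustrative assumptions):

```shell
#!/usr/bin/env bash
# Hypothetical off-switch: the automation exits early when a well-known
# pause file exists, so anyone on the team can stop it with "touch".
declare -r PAUSE_FILE=/tmp/mitigation.pause  # Illustrative path.

paused () {
    [[ -e $PAUSE_FILE ]]
}

rm -f "$PAUSE_FILE"  # Start unpaused for this demonstration.
if paused; then
    echo "Automation is paused via $PAUSE_FILE, doing nothing"
    exit 0
fi
echo "Running mitigation steps..."
```

Pausing is then `touch /tmp/mitigation.pause` and resuming is `rm /tmp/mitigation.pause`; no redeploy needed.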
For every major incident, you need to follow up with an incident retrospective. A blame-free, detailed description of exactly what went wrong to cause the incident, along with a list of steps to take to prevent a similar incident from occurring again in the future.
This usually means creating one or more tickets, which will be dealt with soon. Once the permanent fix is deployed, you can remove your ad-hoc automation and monitoring around it and focus on your regular work again.
E-Mail your comments to hi@paul.cyou :-)
Back to the main site
_______________ |*\_/*|_______
| ___________ | .-. .-. ||_/-\_|______ |
| | | | .****. .****. | | | |
| | 0 0 | | .*****.*****. | | 0 0 | |
| | - | | .*********. | | - | |
| | \___/ | | .*******. | | \___/ | |
| |___ ___| | .*****. | |___________| |
|_____|\_/|_____| .***. |_______________|
_|__|/ \|_|_.............*.............._|________|_
/ ********** \ / ********** \
/ ************ \ / ************ \
-------------------- --------------------
Published at 2021-08-01T10:37:58+03:00; Updated at 2023-01-23
__
_____....--' .'
___...---'._ o -`(
___...---' \ .--. `\
___...---' | \ \ `|
| |o o | | |
| \___'.-`. '.
| | `---'
'^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^' LGB - Art by lgbearrd
I believe that it is essential to always have free and open-source alternatives to any kind of closed-source proprietary software available to choose from. But there are a couple of points you need to take into consideration.
One benefit of using open-source software is that it doesn't cost anything, right? That's correct in many cases. However, in some cases you still need to spend a significant amount of time configuring the software to work for you. If you aren't careful, it can end up being more expensive to use open-source software than a proprietary commercial alternative.
That's not to say that I haven't seen the same effect with commercial software, where people had to put in a bunch of effort after buying it to make it work, due to a lack of quality or due to high complexity. But that's either bad luck or bad decision-making. Most commercial providers I have worked with try to make it work for you, so that you will also buy other products and services from them later on and they don't lose you as a happy customer.
Producers of commercial software want to earn money after all. This is to grow their businesses and also to be able to pay their employees, who also need to care for their families. Employees build up their careers, build houses, and are proud of their accomplishments in the company.
So per se, commercial software is not a bad thing. Right? At least, commercial closed-source software is not a bad thing at heart. Unfortunately, some companies have to keep their software closed-source so as not to lose their competitive edge over their competitors.
There are also companies that earn on open-source software. All the code they write is free for download and use, but you, as a customer, could pay for service and support if you are not an expert and can't manage it by yourself.
I like this approach, as you can balance the effort and costs the way it suits you best, and in doubt, you can audit the source code. Are you already an expert? Perfect, you don't need to buy additional support for the software. Everything can be set up by yourself, given that you have the time and priority.
Also, once an open-source project has reached a certain size, it is unlikely to be abandoned one day. As long as at least one person is willing to be the open-source maintainer, the project won't die. Whereas commercial providers can decide overnight to retire software, or they can go bankrupt (unless you purchase Microsoft Word; I don't believe that will die anytime soon).
Besides corporations, millions of individual open-source contributors write free and open-source software not for money but for pleasure. Often, they are organized in non-profit organizations, working together to reach a common goal (it is worth mentioning that there are also many professionals, paid by large corporations, working full-time for non-profit open-source projects in order to push the features and reach the goals of those corporations). Sometimes, people don't agree on the project goal, so it gets forked, which can be a good thing. The more diversity, the better, as this is where competition and innovation happen. Also, the end user will end up with more choices.
These open-source projects meet a very high quality standard and are rock-solid, if not better, alternatives to their proprietary counterparts. If a project isn't backed by a large corporation already, you should donate to these open-source organizations and/or individual contributors. I have donated to some projects I use personally. Do you learn a foreign language and use Anki flashcards? It's entirely free and open-source, and they happily accept donations to ensure future maintenance and development.
Looking at the smaller, lesser-known open-source projects (not talking about established ones like FreeBSD and Linux): you can't, however, expect the software to be perfect and bug-free. After all, most of the code is written for pleasure and fun in the developers' free time. Besides the developer themselves, you might be the only user of the project. The software may be a bit clunky to use, bugs are probably lurking around, and it might only work for a very specific use case.
Clunkiness can have its charm, though. And it can also encourage you to contribute code to make things better. There is a lot of such code in personal GitHub and GitLab repositories. The quality of such small open-source projects varies drastically. Many hobbyist programmers see programming as an art and put tons of effort into their projects. Others upload broken crap, which is dangerous to use. So have a look at the code before you use it!
One of the main assumptions about open-source software is that it is more secure than closed-source software because everybody can read and fix the code. Is that actually true? You can only be sure when you audit the code yourself. If you are like me, you won't have time to audit all the open-source software you use. It's impossible to audit more than 100 million lines of Linux kernel code. Static code analysis tools come in handy here, but they still require humans to look at the results.
Security bugs in open-source projects are exposed to the public and fixed quickly, while we don't know exactly what happens to security bugs in closed-source ones. Still, hackers and security specialists can find them through reverse engineering and penetration testing. Overall, when thinking of security, in my opinion it is still better to prefer open-source software, because the larger the project, the higher the probability that security bugs are found and fixed, as more parties are looking into it. Furthermore, provided you have the necessary resources, you could still conduct an audit yourself. The latter especially happens when companies with their own security and penetration testing departments evaluate the use of open-source. This is something not every company can afford, though.
Do you need Microsoft Word? Why not just use the Vim text editor or GNU Emacs to write your letters? If that's too nerdy, you can still use open-source alternatives such as AbiWord or LibreOffice. Larger organizations tend to standardize the software their employees have to use. Unfortunately, as Microsoft Word is the de-facto standard text processing program, most companies prefer Word over LibreOffice. The same goes for Microsoft Excel vs LibreOffice Calc or other spreadsheet alternatives like Gnumeric. I don't know why that is; please....
E-Mail your comments to hi@paul.cyou :-)
I only use free and open-source operating systems on my personal laptops, desktop PCs and servers (FreeBSD and Linux based ones). Most of the programs and apps I use on them are free and open-source as well, and I have been comfortable with that for over twenty years. Exceptions are the BIOSes and some firmware of my devices. I also use Skype, as most of my friends and family use it. These are, unfortunately, still proprietary software. But I will look into Matrix as a Skype alternative when I have time. There are also open BIOS alternatives, but they usually don't work on my devices.
Update 2023-01-21: Check out my newer post about GrapheneOS, which solves some of my dilemmas
Why GrapheneOS Rox
I struggle to go 100% open-source on my Smartphone. I use a Samsung phone with the stock Android as provided by Samsung. I love the device as it is large enough to use as a portable reading and note-taking device, and it can also take decent pictures. As a cloud backup solution, I have my own NextCloud server (open-source). Android is mainly open-source software, but many closed parts are still included. I replaced most of the standard apps with free and open-source variants from the F-Droid store though.
I could get a LineageOS-based phone to get rid of the proprietary Android parts (I tried that a couple of times in the past). But then a couple of convenient apps, such as Google Maps, banking apps, Skype, the e-ticket apps of various airlines, restaurant review apps, Audible (I think Audible offers an excellent service), etc., won't work anymore. The proprietary Google Maps is still the best maps app, even though open alternatives are available. It's not that I couldn't live without these apps, but they make life a lot more convenient.
Thinking about alternative solutions is always a good idea. My advice is never to be entirely dependent on any proprietary software. Before you decide to use proprietary software, try to find alternatives in the open-source world. You might need to invest some time playing around with the available options. Maybe they are good enough for you, maybe not.
If you still want to use proprietary software, use it with caution. Have a look at the recent change at Google Photos: For a long time, "high quality" photos could be uploaded there for free without counting against any quota. However, Google recently changed the model so that people exceeding a quota have to start paying for the extra space consumed. I am not against Google's decision, but it shows that a provider can always change direction. So you can't entirely rely on such services. I repeat myself: Don't fully rely on anything proprietary, but you might still use proprietary software or services for your own convenience.
The biggest problem I have with going 100% open-source is actually time. You can't control all the software you use or might be using in the future. You have only a finite amount of time in your life. So you have to decide what's more important: investigating and using an open-source alternative for every program and app you have installed, or spending quality time with your family, having a nice walk in the park, going to a sports class or cooking a nice meal? You can't control it all in today's world of tech, not as a user and not even as a tech worker. There's a great blog post worth reading:
https://unixsheikh.com/articles/how-to-stay-sane-in-todays-world-of-tech.html
Regarding my personal Smartphone dilemma: I guess the middle way is to use two phones:
I have also been playing with other smartphone OS alternatives, especially MeeGo (which has already died) and SailfishOS. Security and privacy seem to be significantly improved compared to Android. As a matter of fact, I bought a cheap, used Sony Xperia XA2 last year and installed SailfishOS on it. It's a nice toy, but it's still not the open-source holy grail, as there are proprietary parts in SailfishOS too. Platforms such as Mobian, Ubuntu Touch and Plasma Mobile are more compelling to me. People must explore alternatives to Android and Apple here, as otherwise you won't own any gadgets anymore:
https://news.slashdot.org/story/21/07/10/0120236/by-2030-you-wont-own-any-gadgets
Anyhow, any gadget, including your phone, should be a tool you use. Don't let the phone use you!
Be aware that it might be to your disadvantage if you manage to go completely undercover without anyone collecting data from you. Suppose you are a nobody on the web (no social media profiles, no tracking history, etc.). In that case, you aren't behaving like the masses, and that makes you suspicious. So it might even be a good thing to leave your marks here and there once in a while. You aren't hiding anything anyway, correct? Just be mindful of what you share about yourself. I share personal things very rarely on Facebook, for example. And I only share a small subset of my personal life on my homepage, this blog, and my social media accounts. Nobody is interested in what I have for breakfast anyway, I guess. Write me an E-Mail if you are interested in what I am having for breakfast.
You might have noticed that I wrote a lot about Smartphones in this article. The reason is that free and open-source software for Smartphones is still evolving. In contrast, for Laptops and Desktop PCs, it's already there. There is no reason to use proprietary operating systems such as Windows or macOS on your computers unless your employer forces you to use one of these. Why would they force you? It has to do with standardization again. The IT department can only manage so many platforms. It wouldn't be manageable by IT if every employee installed their own Linux distribution or one of the *BSDs. That might work for small startups but not for larger companies, especially not for security-focused ones.
I would love a standardized Linux at work, though. Dell and Lenovo even officially support Linux on their notebooks. The sticking point may be finding knowledgeable IT staff to maintain and support the Desktop Linux users. Not all colleagues are Linux geeks like you and me. I am using macOS for work, but I am not an Apple expert. Occasionally I have to contact IT support about some issue. I don't use the macOS GUI a lot; I mainly live in the terminal, so I can run the same tools I also use on Linux.
Should you be pedantic about open-source software? It depends on your fundamental values and how much time you are ready to invest. Open-source software is not just free as in money, but also free as in freedom. You gain back complete control of your personal data. Unfortunately, installing ready-made proprietary apps from the Play Store is much more convenient than building a trustworthy open-source-based infrastructure by yourself. As a guideline, use proprietary software and services with caution. Be mindful about your choices and where you leave your digital fingerprints. When in doubt, think less is more. Do you really need this shiny new app? What benefit does it provide to you? Probably you don't really need it.
You have better chances when you know how to manage your own server and can install and run alternatives to the big cloud providers yourself. I have the advantage of work experience as a Linux Systems Administrator here. I mentioned NextCloud already: I use it for online photo and file storage, contact and calendar sync, and as an RSS news feed server. You could do the same with your own E-Mail server, and you can also host your own website and blog. I also mentioned Matrix as a Skype alternative (which could also replace WhatsApp, Telegram, Viber, ...). I don't know a lot about Matrix yet, but it seems to be a very neat alternative. I am ready to invest time in it as one of my future personal pet projects, not only because I think it's better, but also for fun and as a hobby. This doesn't mean that I will invest *all* of my personal free time in it, though.
E-Mail your comments to hi@paul.cyou :-)
Back to the main site

Published at 2021-07-04T10:51:23+01:00
When I was a Linux System Administrator, I programmed in Perl for years. I still maintain some personal Perl projects (e.g. Xerl, guprecords, Loadbars). After switching jobs a couple of years ago (becoming a Site Reliability Engineer), I found Ruby (and some Python) widely used there. As I wanted to do something new, I decided to give Ruby a go.
You should learn or try out one new programming language once yearly anyway. If you end up not using the new language, that's not a problem. You will learn new techniques with each new programming language and this also helps you to improve your overall programming skills even for other languages. Also, having some background in a similar programming language makes it reasonably easy to get started. Besides that, learning a new programming language is kick-a** fun!

Superficially, Perl seems to have many similarities to Ruby (although, of course, Ruby is entirely different from Perl when you look closer), which pushed me towards Ruby instead of Python. I had tried Python a couple of times before, and I managed to write good code, but I never felt satisfied with the language. I didn't love the syntax, especially the significant indentation; it always confused me. I don't dislike Python, but I don't prefer to program in it if I have a choice, especially when more compelling alternatives are available. Personally, I find it so much more fun to program in Ruby than in Python.

Yukihiro Matsumoto, the inventor of Ruby, said: "I wanted a scripting language that was more powerful than Perl and more object-oriented than Python", so I can see where some of the similarities come from. I personally don't believe that Ruby is more powerful than Perl, though, especially when you take CPAN and/or Perl 6 (now known as Raku) into the equation. Well, it all depends on what you mean by "more powerful". But I want to stay pragmatic and use what's already used at my workplace.
I wrote a lot of Ruby code over the last couple of years. There were many small to medium-sized tools and other projects, such as Nagios monitoring checks and even an internal monitoring & reporting site based on Sinatra. All Ruby scripts I wrote do their work well; I didn't encounter any significant problems using Ruby for any of these tasks. Of course, there's nothing here that couldn't also have been written in Perl (or Python); after all, these languages are all Turing-complete, and they all come with a huge set of 3rd-party libraries :-).
I don't use Ruby for all programming projects, though.
For all other in-between tasks I mainly use the Ruby programming language (unless I decide to give something new a shot once in a while).
As a Site Reliability Engineer, there were many tasks and problems to be solved as efficiently and quickly as possible and, of course, without bugs. So I learned Ruby relatively fast by doing, with the occasional web search for "how to do thing X". I was always eager to get the problem at hand solved, and as long as the code solved the problem, I usually was happy.
Until now, I had never read a whole book or taken a course on Ruby. As a result, I found myself writing Ruby in a Perl-ish procedural style (with Perl you can do object-oriented programming too, but Perl wasn't designed from the ground up to be an object-oriented language). I didn't take advantage of all the specialities Ruby has to offer, as I invested most of my time in the problems at hand rather than in the idiomatic Ruby way of doing things.
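To illustrate the difference, here is a small hypothetical example (the function names are made up, not from any of my real code) showing the same task written in a Perl-ish procedural style and in a more idiomatic Ruby style:

```ruby
# Hypothetical task: sum the squares of the even numbers in a list.

# Perl-ish procedural style: explicit loop and a mutable accumulator.
def sum_even_squares_procedural(numbers)
  sum = 0
  for n in numbers
    next unless n.even?
    sum += n * n
  end
  sum
end

# Idiomatic Ruby: chain Enumerable methods instead of managing state by hand.
def sum_even_squares_idiomatic(numbers)
  numbers.select(&:even?).sum { |n| n * n }
end

puts sum_even_squares_procedural([1, 2, 3, 4])  # 20
puts sum_even_squares_idiomatic([1, 2, 3, 4])   # 20
```

Both versions work fine; the idiomatic one just says what it computes rather than how.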
An unexpected benefit was that most of my Ruby code (probably not all; there are always dark corners lurking in old code bases) was easy to follow, extend and fix, even by people who don't usually speak Ruby, as there wasn't too much magic involved. However, I could still have done better. Looking at other Ruby projects, I noticed over time that there is so much more to the language I wanted to explore: new techniques, Ruby best practices, and much more about how things work under the hood.
I do have an O'Reilly Safari Online subscription (thank you, employer). To my liking, I found "The Well-Grounded Rubyist" book there (both the text version and the video version). I watched the video version over a couple of weeks, chunking the content into small pieces so it fit into my schedule, increasing the playback speed for the topics I already knew well, slowing it down when there was something new to learn, and occasionally jumping back to the text book to review what I had just learned. To my satisfaction, I was already familiar with over half of the language. But there was still a big chunk, especially how the magic happens under the hood in Ruby, which I had missed out on. I am happy to be aware of it now.
I also loved the occasional dry humour in the book: "An enumerator is like a brain in a science fiction movie, sitting on a table with no connection to a body but still able to think". :-)
Will I rewrite and refactor all of my existing Ruby programs? Probably not, as they all do their work as intended. Some of these scripts will eventually be replaced or retired. But depending on the situation, I might refactor a module, class or a method or two once in a while. I already knew how to program in an object-oriented style from other languages (e.g. Java, C++, Perl with Moose and plain) before I started Ruby, so my existing Ruby code is not as bad as you might assume after reading this article :-). In contrast to Java/C++, Ruby is a dynamic language, and the idiomatic ways of doing things differ from those of statically typed languages.
These are my key takeaways. These only point out some specific things I have learned, and represent, by far, not everything I've learned from the book.
In Ruby, everything is an object. However, Ruby is not Smalltalk. It depends on what you mean by "everything". Fixnums are objects. Classes are too, as instances of class Class. Methods, operators and blocks aren't, but they can be wrapped in objects, e.g. via a "Proc". A simple assignment is not an object and can't be wrapped in one. Statements like "while" also aren't and can't. Comments obviously fall into the latter group as well. Ruby is more object-oriented than anything else I have ever seen, except for Smalltalk.
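A quick sketch of what is and isn't an object (the variable names are just illustrative):

```ruby
# Integers are objects with a class and methods:
puts 42.class        # Integer
puts 42.even?        # true

# Classes are objects too: instances of class Class.
puts String.class    # Class

# A method itself isn't an object, but it can be wrapped in one:
m = 42.method(:even?)
puts m.call          # true

# A block isn't an object either, unless wrapped in a Proc:
double = proc { |x| x * 2 }
puts double.call(21) # 42
```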
In Ruby, like in Java/C++, classes are classes, objects are instances of classes, and there is class inheritance. Ruby only has single inheritance, but with the power of mixing in modules, you can extend your classes in a better way than multiple class inheritance (like in C++) would allow. It's also different from Java interfaces, as interfaces in Java only specify method prototypes and not actual method implementations like Ruby modules do.
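A minimal mixin sketch (module and class names are made up for illustration): the module ships a real method implementation, and the including class only provides what the module relies on:

```ruby
# A module carries actual method implementations, unlike a Java interface.
module Greetable
  def greet
    "Hello, #{name}!"  # relies on the including class providing #name
  end
end

class Person
  include Greetable  # mix the module's methods into this class
  attr_reader :name

  def initialize(name)
    @name = name
  end
end

puts Person.new('Paul').greet              # Hello, Paul!
puts Person.ancestors.include?(Greetable)  # true
```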
In Ruby, you can also have singleton objects. A singleton object can be an instance of a class that is modified after its creation (e.g. a method added to only this particular instance after its instantiation). Another variant of a singleton object is a class itself (yes, classes are also objects in Ruby). All of that is described much better in the book, so have a read yourself if you are confused now; just remember: Ruby's object system is very dynamic and flexible. At runtime, you can add and modify classes, objects of classes, singleton objects and modules. You don't need to restart the Ruby interpreter; you can change the code during runtime dynamically through Ruby code.
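A short sketch of a singleton method (the objects below are hypothetical, just for illustration):

```ruby
a = Object.new
b = Object.new

# Add a method to only this one instance (it lives in a's singleton class):
def a.shout
  'I am special!'
end

puts a.shout                # I am special!
puts b.respond_to?(:shout)  # false

# A "class method" is the same mechanism: a singleton method defined on
# the class object itself.
class Config
  def self.default
    new
  end
end

puts Config.default.class   # Config
```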
Due to Ruby's flexibility through object individualization (e.g. adding methods at runtime, changing the core behaviour of classes, or catching unknown method calls and dynamically dispatching and/or generating the missing methods via the "method_missing" method), Ruby is a very good language for writing your own small domain-specific language (DSL) on top of Ruby syntax. I only noticed that after reading this book. Maybe this is one of the reasons why even the configuration management system Puppet once tried to use a Ruby DSL instead of the Puppet DSL for its manifests. I am not sure why that project was abandoned; it probably has to do with performance. To be honest, Ruby is not the fastest language, but it is fast enough for most use cases. And, especially from Ruby 3 on, performance is one of the main things being worked on. If I want performance, I can always use another programming language.
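As a rough illustration of the method_missing technique (the Settings class below is hypothetical and has nothing to do with Puppet): unknown method calls are caught and turned into dynamic getters and setters, which is the kind of trick DSLs build upon:

```ruby
class Settings
  def initialize
    @data = {}
  end

  # Catch calls like `s.colour = 'red'` and `s.colour` dynamically.
  def method_missing(name, *args)
    key = name.to_s
    if key.end_with?('=')
      @data[key.chomp('=')] = args.first
    elsif @data.key?(key)
      @data[key]
    else
      super
    end
  end

  # Keep respond_to? honest for the dynamically handled names.
  def respond_to_missing?(name, include_private = false)
    @data.key?(name.to_s.chomp('=')) || super
  end
end

s = Settings.new
s.colour = 'red'  # no such method was ever defined
puts s.colour     # red
```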
Ruby will fall back to the default "self" object if you don't specify an object method receiver. To give you an example, some more explanation is needed: There is the "Kernel" module mixed into almost every Ruby object. For example, "puts" is just a method of module "Kernel". When you write "puts :foo", Ruby sends the message "puts" to the current object "self". The class of object "self" is "Object". Class Object has module "Kernel" mixed in, and "Kernel" defines the method "puts".
>> self
=> main
>> self.class
=> Object
>> self.class.included_modules
=> [PP::ObjectMixin, Kernel]
>> Kernel.class
=> Module
>> Kernel.methods.grep(/puts/)
=> [:puts]
>> puts 'Hello Ruby'
Hello Ruby
=> nil
>> self.puts 'Hello World'
Hello World
=> nil
Ruby offers a lot of syntactic sugar and seemingly magic, but it all comes back to objects and messages to objects under the hood. As all is hidden in objects, you can unwrap and even change the magic and see what's happening under the hood. Then, suddenly everything makes so much sense.
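For example, even arithmetic and element access are just messages sent to objects; a quick sketch:

```ruby
# `1 + 2` is syntactic sugar for sending the message :+ to the object 1:
puts 1 + 2           # 3
puts 1.+(2)          # 3
puts 1.send(:+, 2)   # 3

# `a[0]` and `a[0] = x` are method calls on the array object, too:
a = [10, 20]
puts a.[](0)         # 10
a.[]=(0, 99)
p a                  # [99, 20]
```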
Ruby embraces an object-oriented programming style. But there is good news for fans of the functional programming paradigm: immutable data (frozen objects), pure functions, lambdas and higher-order functions, lazy evaluation, tail-recursion optimization, method chaining, currying and partial function application, all of that is there. I am delighted about that, as I am a big fan of functional programming (having played with Haskell and Standard ML before).
Remember, however, that Ruby is not a pure functional programming language. You, the Rubyist, need to decide explicitly when to apply a functional style, as at heart Ruby is designed to be an object-oriented language. The language will not enforce side-effect avoidance, you have to enable tail-recursion optimization (as of Ruby 2.5) explicitly, and variables/objects aren't immutable by default either. But none of that hinders you from using these features.
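A few of these functional building blocks in one minimal sketch (the variable names are made up):

```ruby
# Immutable data: a frozen object raises FrozenError on modification.
name = 'Paul'.freeze
puts name.frozen?                # true

# Lambdas, currying and partial application:
add = ->(x, y) { x + y }
increment = add.curry[1]         # partially apply the first argument
puts increment.call(41)          # 42

# Lazy evaluation over an infinite range:
squares = (1..Float::INFINITY).lazy.map { |n| n * n }
p squares.first(3)               # [1, 4, 9]
```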
I liked this book so much so that I even bought myself a (used) paper copy of it. To my delight, there was also a free eBook version in ePub format included, which I now have on my Kobo Forma eBook reader. :-)
Will I abandon my beloved Perl? Probably not. There are also some Perl scripts I use at work. But unfortunately I only have a limited amount of time and I have to use it wisely. I might look into Raku (formerly known as Perl 6) next year and use it for a personal pet project, who knows. :-). I also highly recommend reading the two Perl books "Modern Perl" and "Higher-Order Perl".
E-Mail your comments to hi@paul.cyou :-)
Back to the main site
o .,<>., o
|\/\/\/\/|
'========'
(_ SSSSSSs
)a'`SSSSSs
/_ SSSSSS
.=## SSSSS
.#### SSSSs
###::::SSSSS
.;:::""""SSS
.:;:' . . \\
.::/ ' .'|
.::( . |
:::) \
/\( /
/) ( |
.' \ . ./ /
_-' |\ . |
_..--.. . /"---\ | ` | . |
-=====================,' _ \=(*#(7.#####() | `/_.. , (
_.-''``';'-''-) ,. \ ' '+/// | .'/ \ ``-.) \
,' _.- (( `-' `._\ `` \_/_.' ) /`-._ ) |
,'\ ,' _.'.`:-. \.-' / <_L )" |
_/ `._,' ,')`; `-'`' | L / /
/ `. ,' ,|_/ / \ ( <_-' \
\ / `./ ' / /,' \ /|` `. |
)\ /`._ ,'`._.-\ |) \'
/ `.' )-'.-,' )__) |\ `|
: /`. `.._(--.`':`':/ \ ) \ \
|::::\ ,'/::;-)) / ( )`. |
||::::: . .::': :`-( |/ . |
||::::| . :| |==[]=: . - \
|||:::| : || : | | /\ ` |
___ ___ '|;:::| | |' \=[]=| / \ \
| /_ ||``|||::::: | ; | | | \_.'\_ `-.
: \_``[]--[]|::::'\_;' )-'..`._ .-'\``:: ` . \
\___.>`''-.||:.__,' SSt |_______`> <_____:::. . . \ _/
`+a:f:......jrei'''

paul in uranus in gemtexter on 🌱 main
❯ wc -l gemtexter lib/*
117 gemtexter
59 lib/assert.source.sh
128 lib/atomfeed.source.sh
64 lib/gemfeed.source.sh
161 lib/generate.source.sh
50 lib/git.source.sh
162 lib/html.source.sh
30 lib/log.source.sh
63 lib/md.source.sh
834 total
gemtext='=> http://example.org Description of the link'
assert::equals "$(generate::make_link html "$gemtext")" \
'<a class="textlink" href="http://example.org">Description of the link</a><br />'
gemtext='=> http://example.org Description of the link'
assert::equals "$(generate::make_link md "$gemtext")" \
'[Description of the link](http://example.org) '
.---------------------------.
/,--..---..---..---..---..--. `.
//___||___||___||___||___||___\_|
[j__ ######################## [_|
\============================|
.==| |"""||"""||"""||"""| |"""||
/======"---""---""---""---"=| =||
|____ []* ____ | ==||
// \\ // \\ |===|| hjw
"\__/"---------------"\__/"-+---+'
#!/bin/bash
#!/usr/bin/env bash
# All fits on one line
command1 | command2

# Long commands
command1 \
    | command2 \
    | command3 \
    | command4
# Long commands
command1 |
command2 |
command3 |
command4
greet () {
local -r greeting="${1}"
local -r name="${2}"
echo "${greeting} ${name}!"
}
say_hello_to_paul () {
local -r greeting=Hello
local -r name=Paul
echo "$greeting $name!"
}
declare FOO=bar
# Curly braces around FOO are necessary
echo "foo${FOO}baz"
# Prefer this:
addition=$(( X + Y ))
substitution="${string/#foo/bar}"
# Instead of this:
addition="$(expr "${X}" + "${Y}")"
substitution="$(echo "${string}" | sed -e 's/^foo/bar/')"
declare -r SUGAR_FREE=yes
declare -r I_NEED_THE_BUZZ=no
buy_soda () {
local -r sugar_free=$1
if [[ $sugar_free == yes ]]; then
echo 'Diet Dr. Pepper'
else
echo 'Pepsi Coke'
fi
}
buy_soda $I_NEED_THE_BUZZ
# What does this set?
# Did it succeed? In part or whole?
eval $(set_my_variables)

# What happens if one of the returned values has a space in it?
variable="$(eval some_function)"
% cat vars.source.sh
declare foo=bar
declare bar=baz
declare baz=foo
% bash -c 'source vars.source.sh; echo $foo $bar $baz'
bar baz foo
% cat vars.sh
#!/usr/bin/env bash
cat <<END
declare date="$(date)"
declare user=$USER
END
% bash -c 'source <(./vars.sh); echo "Hello $user, it is $date"'
Hello paul, it is Sat 15 May 19:21:12 BST 2021
filter_lines () {
echo 'Start filtering lines in a fancy way!' >&2
grep ... | sed ....
}
process_lines () {
echo 'Start processing line by line!' >&2
while read -r line; do
... do something and produce a result...
echo "$result"
done
}
# Do some post-processing of the data
postprocess_lines () {
echo 'Start removing duplicates!' >&2
sort -u
}
generate_report () {
echo 'My boss wants to have a report!' >&2
tee outfile.txt
wc -l outfile.txt
}
main () {
filter_lines |
process_lines |
postprocess_lines |
generate_report
}
main
some_function () {
local -r param_foo="$1"; shift
local -r param_baz="$1"; shift
local -r param_bay="$1"; shift
...
}
some_function () {
local -r param_foo="$1"; shift
local -r param_bar="$1"; shift
local -r param_baz="$1"; shift
local -r param_bay="$1"; shift
...
}
some_function () {
local -r param_bar="$1"; shift
local -r param_baz="$1"; shift
local -r param_bay="$1"; shift
...
}
set -e
grep -q foo <<< bar
echo Jo
#!/usr/bin/env bash
set -e
some_function () {
.. some critical code
...
set +e
# Grep might fail, but that's OK now
grep ....
local -i ec=$?
set -e
.. critical code continues ...
if [[ $ec -ne 0 ]]; then
...
fi
...
}
if [[ "${my_var}" > 3 ]]; then
# True for 4, false for 22.
do_something
fi
if (( my_var > 3 )); then
do_something
fi
if [[ "${my_var}" -gt 3 ]]; then
do_something
fi
tar -cf - ./* | ( cd "${dir}" && tar -xf - )
if (( PIPESTATUS[0] != 0 || PIPESTATUS[1] != 0 )); then
echo "Unable to tar files to ${dir}" >&2
fi
tar -cf - ./* | ( cd "${DIR}" && tar -xf - )
return_codes=( "${PIPESTATUS[@]}" )
if (( return_codes[0] != 0 )); then
do_something
fi
if (( return_codes[1] != 0 )); then
do_something_else
fi
/\
/ \
| |
|NASA|
| |
| |
| |
' `
|Gemini|
| |
|______|
'-`'-` .
/ . \'\ . .'
''( .'\.' ' .;'
'.;.;' ;'.;' ..;;' AsH





dtail --servers serverlist.txt --files '/var/log/*.log' --regex '(?i:error)'
Published at 2018-06-01T14:50:29+01:00; Updated at 2021-05-08
.---.
/ \
\.@-@./
/`\_/`\
// _ \\
| \ )|_
/`\_`> <_/ \
jgs\__/'---'\__/
This text was first published in the German IT-Administrator computer magazine. Three years have passed since then, and I decided to publish it on my blog too.
https://www.admin-magazin.de/Das-Heft/2018/06/Realistische-Lasttests-mit-I-O-Riot
I haven't worked on I/O Riot for some time now, but everything written here is still valid. I still use I/O Riot to debug I/O issues and patterns once in a while, so by all means the tool is not obsolete yet. It even helped to resolve a major production incident at work caused by disk I/O.
I am eagerly looking forward to revamping I/O Riot so that it uses the new BPF Linux capabilities instead of plain old Systemtap (alternatively, I have learned that newer versions of Systemtap can also use BPF as the backend). Also, when I initially wrote I/O Riot, I didn't have any experience with the Go programming language yet, and therefore I wrote it in C. Once it gets revamped, I might consider using Go instead of C, as it would spare me many segmentation faults and headaches during development ;-). Or I might just stick to C for plain performance reasons and only refactor the code dealing with concurrency.
Please note that some of the screenshots show the command "ioreplay" instead of "ioriot". That's because the name changed after those were taken.
With I/O Riot IT administrators can load test and optimize the I/O subsystem of Linux-based operating systems. The tool makes it possible to record I/O patterns and replay them at a later time as often as desired. This means bottlenecks can be reproduced and eradicated.
When storing huge amounts of data, such as more than 200 billion archived emails at Mimecast, it's not only the available storage capacity that matters, but also the data throughput and latency. At the same time, operating costs must be kept as low as possible. The more systems involved, the more important it is to optimize the hardware, the operating system and the applications running on it.
Conventional I/O benchmarking: Administrators usually use open source benchmarking tools like IOZone and bonnie++. Available database systems such as Redis and MySQL come with their own benchmarking tools. The common problem with these tools is that they work with prescribed artificial I/O patterns. Although this can test both sequential and randomized data access, the patterns do not correspond to what can be found on production systems.
Testing by load test environment: Another option is to use a separate load test environment in which, as far as possible, a production environment with all its dependencies is simulated. However, an environment consisting of many microservices is very complex. Microservices are usually managed by different teams, which means extra coordination effort for each load test. Another challenge is to generate the load as authentically as possible so that the patterns correspond to a productive environment. Such a load test environment can only handle as many requests as its weakest link can handle. For example, load generators send many read and write requests to a frontend microservice, whereby the frontend forwards the requests to a backend microservice responsible for storing the data. If the frontend service does not process the requests efficiently enough, the backend service is not well utilized in the first place. As a rule, all microservices are clustered across many servers, which makes everything even more complicated. Under all these conditions it is very difficult to test I/O of separate backend systems. Moreover, for many small and medium-sized companies, a separate load test environment would not be feasible for cost reasons.
Testing in the production environment: For these reasons, benchmarks are often carried out in the production environment. To derive value from this, such tests are especially performed during peak hours when systems are under high load. However, testing on production systems is associated with risks and can lead to failures or loss of data without adequate protection.
For email archiving, Mimecast uses an internally developed microservice, which is operated directly on Linux-based storage systems. A storage cluster is divided into several replication volumes. Data is always replicated three times across two secure data centers. Customer data is automatically allocated to one or more volumes, depending on throughput, so that all volumes are automatically assigned the same load. Customer data is archived on conventional, but inexpensive hard disks with several terabytes of storage capacity each. I/O benchmarking proved difficult for all the reasons mentioned above. Furthermore, there are no ready-made tools for this purpose in the case of self-developed software. The service operates on many block devices simultaneously, which can make the RAID controller a bottleneck. None of the freely available benchmarking tools can test several block devices at the same time without extra effort. In addition, emails typically consist of many small files. Randomized access to many small files is particularly inefficient. In addition to many software adaptations, the hardware and operating system must also be optimized.
Mimecast encourages employees to be innovative and pursue their own ideas in the form of an internal competition, Pet Project. The goal of the pet project I/O Riot was to simplify OS and hardware level I/O benchmarking. The first prototype of I/O Riot was awarded an internal roadmap prize in the spring of 2017. A few months later, I/O Riot was used to reduce write latency in the storage clusters by about 50%. The improvement was first verified by I/O replay on a test system and then successively applied to all storage systems. I/O Riot was also used to resolve a production incident caused by disk I/O load.
First, all I/O events on a production system are logged to a file with I/O Riot. The file is then copied to a test system where all events are replayed in the same way. The crucial point is that you can reproduce I/O patterns as found on a production system as often as you like on a test system. This opens up the possibility of tuning the system after each run.
I/O Riot was tested under CentOS 7.2 x86_64. For compiling, the GNU C compiler and Systemtap including kernel debug information are required. Other Linux distributions are theoretically compatible but untested. First of all, you should update the systems involved as follows:
% sudo yum update
If the kernel was updated, please restart the system. The installation would also work without a restart, but that would complicate it. The installed kernel version should always correspond to the currently running kernel. You can then install I/O Riot as follows:
% sudo yum install gcc git systemtap yum-utils kernel-devel-$(uname -r)
% sudo debuginfo-install kernel-$(uname -r)
% git clone https://github.com/mimecast/ioriot
% cd ioriot
% make
% sudo make install
% export PATH=$PATH:/opt/ioriot/bin
Note: It is not best practice to install any compilers on production systems. For further information please have a look at the enclosed README.md.
All I/O events are kernel related. If a process wants to perform an I/O operation, such as opening a file, it must ask the kernel to do so via a system call (syscall for short). I/O Riot relies on the Systemtap tool to record I/O syscalls. Systemtap, available for all popular Linux distributions, lets you look into the running kernel even in production environments, which makes it ideally suited to monitor all I/O-relevant Linux syscalls and log them to a file. Other tools, such as strace, are not an alternative because they slow down the system too much.
During recording, ioriot acts as a wrapper and executes all relevant Systemtap commands for you. Use the following command to log all events to io.capture:
% sudo ioriot -c io.capture

A Ctrl-C (SIGINT) stops recording prematurely; otherwise, ioriot terminates automatically after 1 hour. Depending on the system load, the output file can grow to several gigabytes. Only metadata is logged, not the read and written data itself; during replay, random data is used instead. Under certain circumstances, Systemtap may omit some system calls and issue warnings. This ensures that Systemtap does not consume too many resources.
Then copy io.capture to a test system. The log also contains all accesses to the pseudo file systems devfs, sysfs and procfs. Replaying these makes little sense, which is why you must first generate a cleaned, replayable version io.replay from io.capture as follows:
% sudo ioriot -c io.capture -r io.replay -u $USER -n TESTNAME
The parameter -n allows you to assign a freely selectable test name. The system user under which the test is to be replayed is specified via parameter -u.
The test will most likely want to access pre-existing files, i.e. files the test reads but does not create by itself. Their existence must be ensured before the test starts. You can do this as follows:
% sudo ioriot -i io.replay
To avoid any damage to the running system, ioriot only works in special directories. The tool creates a separate subdirectory for each file system mount point (e.g. /, /usr/local, /store/00, ...), here: /.ioriot/TESTNAME, /usr/local/.ioriot/TESTNAME, /store/00/.ioriot/TESTNAME, and so on. By default, the working directory of ioriot is /usr/local/ioriot/TESTNAME.

You must re-initialize the environment before each run. Data from previous tests is moved to a trash directory automatically, which can be deleted permanently with "sudo ioriot -P".
After initialization, you can replay the log with -r. With -R you can initiate both test initialization and replay in a single command, and -S specifies a file to which statistics are written after the test run.
You can also influence the playback speed: "-s 0" is interpreted as "Playback as fast as possible" and is the default setting. With "-s 1" all operations are performed at original speed. "-s 2" would double the playback speed and "-s 0.5" would halve it.

As an initial test, you could for example compare the two Linux I/O schedulers CFQ and Deadline and check with which scheduler the test runs the fastest. You run the test separately for each scheduler. The following shell loop iterates over all attached block devices of the system and changes their I/O scheduler to the one specified in the variable $new_scheduler (in this case either cfq or deadline). Subsequently, all I/O events from the io.replay log are replayed. At the end, an output file with statistics is generated:
% new_scheduler=cfq
% for scheduler in /sys/block/*/queue/scheduler; do
echo $new_scheduler | sudo tee $scheduler
done
% sudo ioriot -R io.replay -S cfq.txt
% new_scheduler=deadline
% for scheduler in /sys/block/*/queue/scheduler; do
echo $new_scheduler | sudo tee $scheduler
done
% sudo ioriot -R io.replay -S deadline.txt
According to the results, the test could run 940 seconds faster with Deadline Scheduler:
% cat cfq.txt
Num workers: 4
Threads per worker: 128
Total threads: 512
Highest loadavg: 259.29
Performed ioops: 218624596
Average ioops/s: 101544.17
Time ahead: 1452s
Total time: 2153.00s
% cat deadline.txt
Num workers: 4
Threads per worker: 128
Total threads: 512
Highest loadavg: 342.45
Performed ioops: 218624596
Average ioops/s: 180234.62
Time ahead: 2392s
Total time: 1213.00s
In any case, you should also set up a time series database, such as Graphite, where the I/O throughput can be plotted. Figures 4 and 5 show the read and write access times of both tests. The dip makes it clear when the CFQ test ended and the Deadline test was started. The read latency of both tests is similar. The write latency is dramatically improved with the Deadline scheduler.


You should also take a look at the iostat tool. The iostat screenshot shows the output of iostat -x 10 during a test run. As you can see, a block device is fully loaded with 99% utilization, while all other block devices still have sufficient buffer. This could be an indication of poor data distribution in the storage system and is worth pursuing. It is not uncommon for I/O Riot to reveal software problems.

The tool has already proven to be very useful and will continue to be actively developed as time and priorities permit. Mimecast intends to be an ongoing contributor to Open Source. You can find I/O Riot at:
https://github.com/mimecast/ioriot

Systemtap is a tool for instrumenting the Linux kernel. It provides an AWK-like programming language. Programs written in it are compiled by Systemtap to C and then into a dynamically loadable kernel module. Loaded into the kernel, the program has access to Linux internals. A Systemtap program written for I/O Riot monitors which I/O syscalls take place, when, with which parameters, from which process, and with which return values.
For example, the open syscall opens a file and returns the corresponding file descriptor. The read and write syscalls operate on a file descriptor and return the number of bytes read or written. The close syscall closes a given file descriptor. I/O Riot comes with a ready-made Systemtap program, which is compiled into a kernel module and installed to /opt/ioriot during installation. In addition to open, read and close, it logs many other I/O-relevant calls.
https://sourceware.org/systemtap/

E-Mail your comments to hi@paul.cyou :-)
Back to the main site
___ ___ ____ ____
/ _ \ / _ \| _ \ / ___|
| | | | | | | |_) |____| |
| |_| | |_| | __/_____| |___
\___/ \___/|_| \____|
#include <stdio.h>

typedef struct {
    double (*calculate)(const double, const double);
    char *name;
} something_s;

double multiplication(const double a, const double b) {
    return a * b;
}

double division(const double a, const double b) {
    return a / b;
}

int main(void) {
    something_s mult = (something_s) {
        .calculate = multiplication,
        .name = "Multiplication"
    };

    something_s div = (something_s) {
        .calculate = division,
        .name = "Division"
    };

    const double a = 3, b = 2;

    printf("%s(%f, %f) => %f\n", mult.name, a, b, mult.calculate(a,b));
    printf("%s(%f, %f) => %f\n", div.name, a, b, div.calculate(a,b));
}
Calling the function pointers works with the plain member syntax:

printf("%s(%f, %f) => %f\n", mult.name, a, b, mult.calculate(a,b));
printf("%s(%f, %f) => %f\n", div.name, a, b, div.calculate(a,b));

The explicit dereference syntax is equivalent:

printf("%s(%f, %f) => %f\n", mult.name, a, b, (*mult.calculate)(a,b));
printf("%s(%f, %f) => %f\n", div.name, a, b, (*div.calculate)(a,b));
pbuetow ~/git/blog/source [38268]% gcc oop-c-example.c -o oop-c-example
pbuetow ~/git/blog/source [38269]% ./oop-c-example
Multiplication(3.000000, 2.000000) => 6.000000
Division(3.000000, 2.000000) => 1.500000
To give the function access to the object it belongs to, you could also pass the object itself as an additional argument, e.g.:

mult.calculate(mult,a,b));
Published at 2016-05-22T18:59:01+01:00
Finally, I had time to deploy my authoritative DNS servers (master and slave) for my domains "buetow.org" and "buetow.zone". My domain name provider is Schlund Technologies. They allow their customers to edit the DNS records (BIND zone files) manually, and they also allow you to delegate your domains to your own authoritative DNS servers. From now on, I am making use of that option.
Schlund Technologies

To set up my authoritative DNS servers, I installed a FreeBSD Jail dedicated to DNS with Puppet on my root machine as follows:
include freebsd
freebsd::ipalias { '2a01:4f8:120:30e8::14':
ensure => up,
proto => 'inet6',
preflen => '64',
interface => 're0',
aliasnum => '5',
}
include jail::freebsd
class { 'jail':
ensure => present,
jails_config => {
dns => {
'_ensure' => present,
'_type' => 'freebsd',
'_mirror' => 'ftp://ftp.de.freebsd.org',
'_remote_path' => 'FreeBSD/releases/amd64/10.1-RELEASE',
'_dists' => [ 'base.txz', 'doc.txz', ],
'_ensure_directories' => [ '/opt', '/opt/enc' ],
'host.hostname' => "'dns.ian.buetow.org'",
'ip4.addr' => '192.168.0.15',
'ip6.addr' => '2a01:4f8:120:30e8::15',
},
.
.
}
}
Please note that "dns.ian.buetow.org" is just the Jail name of the master DNS server (and "caprica.ian.buetow.org" the name of the Jail for the slave DNS server), and that I am using the DNS names "dns1.buetow.org" (master) and "dns2.buetow.org" (slave) for the actual service names (these are the DNS servers visible to the public). Please also note that the IPv4 address is an internal one: I use PF for NAT and PAT, and the DNS ports (TCP and UDP) are forwarded to that Jail. By default, all ports are blocked, so I am adding an exception rule for the IPv6 address. These are the PF rules in use:
% cat /etc/pf.conf
.
.
# dns.ian.buetow.org
rdr pass on re0 proto tcp from any to $pub_ip port {53} -> 192.168.0.15
rdr pass on re0 proto udp from any to $pub_ip port {53} -> 192.168.0.15
pass in on re0 inet6 proto tcp from any to 2a01:4f8:120:30e8::15 port {53} flags S/SA keep state
pass in on re0 inet6 proto udp from any to 2a01:4f8:120:30e8::15 port {53} flags S/SA keep state
.
.
In "manifests/dns.pp" (the Puppet manifest for the Master DNS Jail itself), I configured the BIND DNS server this way:
class { 'bind_freebsd':
config => "puppet:///files/bind/named.${::hostname}.conf",
dynamic_config => "puppet:///files/bind/dynamic.${::hostname}",
}
The Puppet module is a pretty simple one. It installs the file "/usr/local/etc/named/named.conf" and it populates the "/usr/local/etc/named/dynamicdb" directory with all my zone files.
Once (Puppet-) applied inside of the Jail, I get this:
paul uranus:~/git/blog/source [4268]% ssh admin@dns1.buetow.org pgrep -lf named
60748 /usr/local/sbin/named -u bind -c /usr/local/etc/namedb/named.conf
paul uranus:~/git/blog/source [4269]% ssh admin@dns1.buetow.org tail -n 13 /usr/local/etc/namedb/named.conf
zone "buetow.org" {
type master;
notify yes;
allow-update { key "buetoworgkey"; };
file "/usr/local/etc/namedb/dynamic/buetow.org";
};
zone "buetow.zone" {
type master;
notify yes;
allow-update { key "buetoworgkey"; };
file "/usr/local/etc/namedb/dynamic/buetow.zone";
};
paul uranus:~/git/blog/source [4277]% ssh admin@dns1.buetow.org cat /usr/local/etc/namedb/dynamic/buetow.org
$TTL 3600
@ IN SOA dns1.buetow.org. domains.buetow.org. (
25 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
; Infrastructure domains
@ IN NS dns1
@ IN NS dns2
* 300 IN CNAME web.ian
buetow.org. 86400 IN A 78.46.80.70
buetow.org. 86400 IN AAAA 2a01:4f8:120:30e8:0:0:0:11
buetow.org. 86400 IN MX 10 mail.ian
dns1 86400 IN A 78.46.80.70
dns1 86400 IN AAAA 2a01:4f8:120:30e8:0:0:0:15
dns2 86400 IN A 164.177.171.32
dns2 86400 IN AAAA 2a03:2500:1:6:20::
.
.
.
.
That is my master DNS server. My slave DNS server runs in another Jail on another bare-metal machine. Everything is set up similarly to the master DNS server, but that server is located in a different DC and in different IP subnets. The only difference is the "named.conf": it is configured to be a slave, which means that the "dynamicdb" directory gets populated by BIND itself through zone transfers from the master.
paul uranus:~/git/blog/source [4279]% ssh admin@dns2.buetow.org tail -n 11 /usr/local/etc/namedb/named.conf
zone "buetow.org" {
type slave;
masters { 78.46.80.70; };
file "/usr/local/etc/namedb/dynamic/buetow.org";
};
zone "buetow.zone" {
type slave;
masters { 78.46.80.70; };
file "/usr/local/etc/namedb/dynamic/buetow.zone";
};
The result looks like this now:
% dig -t ns buetow.org

; <<>> DiG 9.10.3-P4-RedHat-9.10.3-12.P4.fc23 <<>> -t ns buetow.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 37883
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;buetow.org. IN NS

;; ANSWER SECTION:
buetow.org. 600 IN NS dns2.buetow.org.
buetow.org. 600 IN NS dns1.buetow.org.

;; Query time: 41 msec
;; SERVER: 192.168.1.254#53(192.168.1.254)
;; WHEN: Sun May 22 11:34:11 BST 2016
;; MSG SIZE rcvd: 77

% dig -t any buetow.org @dns1.buetow.org

; <<>> DiG 9.10.3-P4-RedHat-9.10.3-12.P4.fc23 <<>> -t any buetow.org @dns1.buetow.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49876
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 7

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;buetow.org. IN ANY

;; ANSWER SECTION:
buetow.org. 86400 IN A 78.46.80.70
buetow.org. 86400 IN AAAA 2a01:4f8:120:30e8::11
buetow.org. 86400 IN MX 10 mail.ian.buetow.org.
buetow.org. 3600 IN SOA dns1.buetow.org. domains.buetow.org. 25 604800 86400 2419200 604800
buetow.org. 3600 IN NS dns2.buetow.org.
buetow.org. 3600 IN NS dns1.buetow.org.

;; ADDITIONAL SECTION:
mail.ian.buetow.org. 86400 IN A 78.46.80.70
dns1.buetow.org. 86400 IN A 78.46.80.70
dns2.buetow.org. 86400 IN A 164.177.171.32
mail.ian.buetow.org. 86400 IN AAAA 2a01:4f8:120:30e8::12
dns1.buetow.org. 86400 IN AAAA 2a01:4f8:120:30e8::15
dns2.buetow.org. 86400 IN AAAA 2a03:2500:1:6:20::

;; Query time: 42 msec
;; SERVER: 78.46.80.70#53(78.46.80.70)
;; WHEN: Sun May 22 11:34:41 BST 2016
;; MSG SIZE rcvd: 322
For monitoring, I am using Icinga2 (I am operating two Icinga2 instances in two different DCs). I may have to post another blog article about Icinga2, but to get the idea, these were the snippets added to my Icinga2 configuration:
apply Service "dig" {
import "generic-service"
check_command = "dig"
vars.dig_lookup = "buetow.org"
vars.timeout = 30
assign where host.name == "dns.ian.buetow.org" || host.name == "caprica.ian.buetow.org"
}
apply Service "dig6" {
import "generic-service"
check_command = "dig"
vars.dig_lookup = "buetow.org"
vars.timeout = 30
vars.check_ipv6 = true
assign where host.name == "dns.ian.buetow.org" || host.name == "caprica.ian.buetow.org"
}
Whenever I have to change a DNS entry, all I have to do is edit the zone file and let Puppet apply it. That's much more comfortable than manually clicking through the web UI at Schlund Technologies.
E-Mail your comments to hi@paul.cyou :-)
Back to the main site

Published at 2016-04-16T22:43:42+01:00
________________
|# : : #|
| : ZFS/GELI : |________________
| : Offsite : |# : : #|
| : Backup 1 : | : ZFS/GELI : |
| :___________: | : Offsite : |
| _________ | : Backup 2 : |
| | __ | | :___________: |
| || | | | _________ |
\____||__|_____|_| | __ | |
| || | | |
\____||__|_____|__|
I enhanced the procedure a bit. From now on, I have two external 2TB USB hard drives. Both are set up precisely the same way. To decrease the probability of both drives failing simultaneously, they are of different brands. One drive is kept at a secret location. The other one is kept at home, right next to my HP MicroServer.
Whenever I update the offsite backup, I am doing it to the drive, which is kept locally. Afterwards, I bring it to the secret location, swap the drives, and bring the other back home. This ensures that I will always have an offsite backup available at a different location than my home - even while updating one copy of it.
Furthermore, I added scrubbing ("zpool scrub ...") to the script. It verifies that the file system is consistent and that there are no bad blocks on the disk. To increase reliability further, I also ran "zfs set copies=2 zroot". That setting is also synchronized to the offsite ZFS pool. ZFS now stores every data block on disk twice. Yes, it consumes twice as much disk space, but it makes the pool more fault-tolerant against hardware errors (e.g. individual disk sectors going bad).
E-Mail your comments to hi@paul.cyou :-)
Back to the main site

Published at 2016-04-09T18:29:47+01:00
__ __
(( \---/ ))
)__ __(
/ ()___() \
\ /(_)\ /
\ \_|_/ /
_______> <_______
//\ |>o<| /\\
\\/___ ___\//
| |
| |
| |
| |
`--....---'
\ \
\ `. hjw
\ `.
Over the last couple of years, I wrote quite a few Puppet modules to manage my personal server infrastructure. One of them manages FreeBSD Jails, another one ZFS file systems. I thought I would give a brief overview of how they look and feel.
The ZFS module is a pretty basic one. It does not manage ZFS pools yet, as I do not create them often enough to justify automating that. But let's see how we can create a ZFS file system (on an already existing ZFS pool named ztank):
Puppet snippet:
zfs::create { 'ztank/foo':
ensure => present,
filesystem => '/srv/foo',
require => File['/srv'],
}
Puppet run:
admin alphacentauri:/opt/git/server/puppet/manifests [1212]% puppet.apply
Password:
Info: Loading facts
Info: Loading facts
Info: Loading facts
Info: Loading facts
Notice: Compiled catalog for alphacentauri.home in environment production in 7.14 seconds
Info: Applying configuration version '1460189837'
Info: mount[files]: allowing * access
Info: mount[restricted]: allowing * access
Notice: /Stage[main]/Main/Node[alphacentauri]/Zfs::Create[ztank/foo]/Exec[ztank/foo_create]/returns: executed successfully
Notice: Finished catalog run in 25.41 seconds
admin alphacentauri:~ [1213]% zfs list | grep foo
ztank/foo 96K 1.13T 96K /srv/foo
admin alphacentauri:~ [1214]% df | grep foo
ztank/foo 1214493520 96 1214493424 0% /srv/foo
admin alphacentauri:~ [1215]%
The destruction of the file system just requires setting "ensure" to "absent" in Puppet:
zfs::create { 'ztank/foo':
ensure => absent,
filesystem => '/srv/foo',
require => File['/srv'],
}
Puppet run:
admin alphacentauri:/opt/git/server/puppet/manifests [1220]% puppet.apply
Password:
Info: Loading facts
Info: Loading facts
Info: Loading facts
Info: Loading facts
Notice: Compiled catalog for alphacentauri.home in environment production in 6.14 seconds
Info: Applying configuration version '1460190203'
Info: mount[files]: allowing * access
Info: mount[restricted]: allowing * access
Notice: /Stage[main]/Main/Node[alphacentauri]/Zfs::Create[ztank/foo]/Exec[zfs destroy -r ztank/foo]/returns: executed successfully
Notice: Finished catalog run in 22.72 seconds
admin alphacentauri:/opt/git/server/puppet/manifests [1221]% zfs list | grep foo
zsh: done       zfs list |
zsh: exit 1     grep foo
admin alphacentauri:/opt/git/server/puppet/manifests [1222:1]% df | grep foo
zsh: done       df |
zsh: exit 1     grep foo
Here is an example of how a FreeBSD Jail can be created. The Jail will have its own public IPv6 address, and it will have its own internal IPv4 address with IPv4 NAT to the internet (due to the limitation that the host server only has one public IPv4 address, which must be shared between all the Jails).
Furthermore, Puppet will ensure that the Jail gets its own ZFS file system (internally, it is using the ZFS module). Please note that the NAT requires the packet filter to be set up correctly (not covered in this blog post).
include jail::freebsd
# Cloned interface for Jail IPv4 NAT
freebsd::rc_config { 'cloned_interfaces':
value => 'lo1',
}
freebsd::rc_config { 'ipv4_addrs_lo1':
value => '192.168.0.1-24/24'
}
freebsd::ipalias { '2a01:4f8:120:30e8::17':
ensure => up,
proto => 'inet6',
preflen => '64',
interface => 're0',
aliasnum => '8',
}
class { 'jail':
ensure => present,
jails_config => {
sync => {
'_ensure' => present,
'_type' => 'freebsd',
'_mirror' => 'ftp://ftp.de.freebsd.org',
'_remote_path' => 'FreeBSD/releases/amd64/10.1-RELEASE',
'_dists' => [ 'base.txz', 'doc.txz', ],
'_ensure_directories' => [ '/opt', '/opt/enc' ],
'_ensure_zfs' => [ '/sync' ],
'host.hostname' => "'sync.ian.buetow.org'",
'ip4.addr' => '192.168.0.17',
'ip6.addr' => '2a01:4f8:120:30e8::17',
},
}
}
This is what the result looks like:
admin sun:/etc [1939]% puppet.apply
Info: Loading facts
Info: Loading facts
Info: Loading facts
Info: Loading facts
Notice: Compiled catalog for sun.ian.buetow.org in environment production in 1.80 seconds
Info: Applying configuration version '1460190986'
Notice: /Stage[main]/Jail/File[/etc/jail.conf]/ensure: created
Info: mount[files]: allowing * access
Info: mount[restricted]: allowing * access
Info: Computing checksum on file /etc/motd
Info: /Stage[main]/Motd/File[/etc/motd]: Filebucketed /etc/motd to puppet with sum fced1b6e89f50ef2c40b0d7fba9defe8
Notice: /Stage[main]/Jail/Jail::Create[sync]/File[/jail/sync]/ensure: created
Notice: /Stage[main]/Jail/Jail::Create[sync]/Zfs::Create[zroot/jail/sync]/Exec[zroot/jail/sync_create]/returns: executed successfully
Notice: /Stage[main]/Jail/Jail::Create[sync]/File[/jail/sync/opt]/ensure: created
Notice: /Stage[main]/Jail/Jail::Create[sync]/File[/jail/sync/opt/enc]/ensure: created
Notice: /Stage[main]/Jail/Jail::Create[sync]/Jail::Ensure_zfs[/sync]/Zfs::Create[zroot/jail/sync/sync]/Exec[zroot/jail/sync/sync_create]/returns: executed successfully
Notice: /Stage[main]/Jail/Jail::Create[sync]/Jail::Freebsd::Create[sync]/File[/jail/sync/.jailbootstrap]/ensure: created
Notice: /Stage[main]/Jail/Jail::Create[sync]/Jail::Freebsd::Create[sync]/File[/etc/fstab.jail.sync]/ensure: created
Notice: /Stage[main]/Jail/Jail::Create[sync]/Jail::Freebsd::Create[sync]/File[/jail/sync/.jailbootstrap/bootstrap.sh]/ensure: created
Notice: /Stage[main]/Jail/Jail::Create[sync]/Jail::Freebsd::Create[sync]/Exec[sync_bootstrap]/returns: executed successfully
Notice: Finished catalog run in 49.72 seconds
admin sun:/etc [1942]% ls -l /jail/sync
total 154
-r--r--r-- 1 root wheel 6198 11 Nov 2014 COPYRIGHT
drwxr-xr-x 2 root wheel 47 11 Nov 2014 bin
drwxr-xr-x 7 root wheel 43 11 Nov 2014 boot
dr-xr-xr-x 2 root wheel 2 11 Nov 2014 dev
drwxr-xr-x 23 root wheel 101 9 Apr 10:37 etc
drwxr-xr-x 3 root wheel 50 11 Nov 2014 lib
drwxr-xr-x 3 root wheel 4 11 Nov 2014 libexec
drwxr-xr-x 2 root wheel 2 11 Nov 2014 media
drwxr-xr-x 2 root wheel 2 11 Nov 2014 mnt
drwxr-xr-x 3 root wheel 3 9 Apr 10:36 opt
dr-xr-xr-x 2 root wheel 2 11 Nov 2014 proc
drwxr-xr-x 2 root wheel 143 11 Nov 2014 rescue
drwxr-xr-x 2 root wheel 6 11 Nov 2014 root
drwxr-xr-x 2 root wheel 132 11 Nov 2014 sbin
drwxr-xr-x 2 root wheel 2 9 Apr 10:36 sync
lrwxr-xr-x 1 root wheel 11 11 Nov 2014 sys -> usr/src/sys
drwxrwxrwt 2 root wheel 2 11 Nov 2014 tmp
drwxr-xr-x 14 root wheel 14 11 Nov 2014 usr
drwxr-xr-x 24 root wheel 24 11 Nov 2014 var
admin sun:/etc [1943]% zfs list | grep sync;df | grep sync
zroot/jail/sync 162M 343G 162M /jail/sync
zroot/jail/sync/sync 144K 343G 144K /jail/sync/sync
/opt/enc 5061624 84248 4572448 2% /jail/sync/opt/enc
zroot/jail/sync 360214972 166372 360048600 0% /jail/sync
zroot/jail/sync/sync 360048744 144 360048600 0% /jail/sync/sync
admin sun:/etc [1944]% cat /etc/fstab.jail.sync
# Generated by Puppet for a Jail.
# Can contain file systems to be mounted during jail start.
admin sun:/etc [1945]% cat /etc/jail.conf
# Generated by Puppet
allow.chflags = true;
exec.start = '/bin/sh /etc/rc';
exec.stop = '/bin/sh /etc/rc.shutdown';
mount.devfs = true;
mount.fstab = "/etc/fstab.jail.$name";
path = "/jail/$name";
sync {
host.hostname = 'sync.ian.buetow.org';
ip4.addr = 192.168.0.17;
ip6.addr = 2a01:4f8:120:30e8::17;
}
admin sun:/etc [1955]% sudo service jail start sync
Password:
Starting jails: sync.
admin sun:/etc [1956]% jls | grep sync
103 192.168.0.17 sync.ian.buetow.org /jail/sync
admin sun:/etc [1957]% sudo jexec 103 /bin/csh
root@sync:/ # ifconfig -a
re0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=8209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC,LINKSTATE>
ether 50:46:5d:9f:fd:1e
inet6 2a01:4f8:120:30e8::17 prefixlen 64
nd6 options=8021<PERFORMNUD,AUTO_LINKLOCAL,DEFAULTIF>
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lo1: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
inet 192.168.0.17 netmask 0xffffffff
nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
To automatically set up the applications running in the Jail, I am using Puppet as well. I wrote a few scripts which bootstrap Puppet inside a newly created Jail. It does the following:
admin sun:~ [1951]% sudo /opt/snonux/local/etc/init.d/enc activate sync
Starting jails: dns.
The package management tool is not yet installed on your system.
Do you want to fetch and install it now? [y/N]: y
Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/freebsd:10:x86:64/latest, please wait...
Verifying signature with trusted certificate pkg.freebsd.org.2013102301... done
[sync.ian.buetow.org] Installing pkg-1.7.2...
[sync.ian.buetow.org] Extracting pkg-1.7.2: 100%
Updating FreeBSD repository catalogue...
[sync.ian.buetow.org] Fetching meta.txz: 100% 944 B 0.9kB/s 00:01
[sync.ian.buetow.org] Fetching packagesite.txz: 100% 5 MiB 5.6MB/s 00:01
Processing entries: 100%
FreeBSD repository update completed. 25091 packages processed.
Updating database digests format: 100%
The following 20 package(s) will be affected (of 0 checked):
New packages to be INSTALLED:
git: 2.7.4_1
expat: 2.1.0_3
python27: 2.7.11_1
libffi: 3.2.1
indexinfo: 0.2.4
gettext-runtime: 0.19.7
p5-Error: 0.17024
perl5: 5.20.3_9
cvsps: 2.1_1
p5-Authen-SASL: 2.16_1
p5-Digest-HMAC: 1.03_1
p5-GSSAPI: 0.28_1
curl: 7.48.0_1
ca_root_nss: 3.22.2
p5-Net-SMTP-SSL: 1.03
p5-IO-Socket-SSL: 2.024
p5-Net-SSLeay: 1.72
p5-IO-Socket-IP: 0.37
p5-Socket: 2.021
p5-Mozilla-CA: 20160104
The process will require 144 MiB more space.
30 MiB to be downloaded.
[sync.ian.buetow.org] Fetching git-2.7.4_1.txz: 100% 4 MiB 3.7MB/s 00:01
[sync.ian.buetow.org] Fetching expat-2.1.0_3.txz: 100% 98 KiB 100.2kB/s 00:01
[sync.ian.buetow.org] Fetching python27-2.7.11_1.txz: 100% 10 MiB 10.7MB/s 00:01
[sync.ian.buetow.org] Fetching libffi-3.2.1.txz: 100% 35 KiB 36.2kB/s 00:01
[sync.ian.buetow.org] Fetching indexinfo-0.2.4.txz: 100% 5 KiB 5.0kB/s 00:01
[sync.ian.buetow.org] Fetching gettext-runtime-0.19.7.txz: 100% 148 KiB 151.1kB/s 00:01
[sync.ian.buetow.org] Fetching p5-Error-0.17024.txz: 100% 24 KiB 24.8kB/s 00:01
[sync.ian.buetow.org] Fetching perl5-5.20.3_9.txz: 100% 13 MiB 6.9MB/s 00:02
[sync.ian.buetow.org] Fetching cvsps-2.1_1.txz: 100% 41 KiB 42.1kB/s 00:01
[sync.ian.buetow.org] Fetching p5-Authen-SASL-2.16_1.txz: 100% 44 KiB 45.1kB/s 00:01
[sync.ian.buetow.org] Fetching p5-Digest-HMAC-1.03_1.txz: 100% 9 KiB 9.5kB/s 00:01
[sync.ian.buetow.org] Fetching p5-GSSAPI-0.28_1.txz: 100% 41 KiB 41.7kB/s 00:01
[sync.ian.buetow.org] Fetching curl-7.48.0_1.txz: 100% 2 MiB 2.2MB/s 00:01
[sync.ian.buetow.org] Fetching ca_root_nss-3.22.2.txz: 100% 324 KiB 331.4kB/s 00:01
[sync.ian.buetow.org] Fetching p5-Net-SMTP-SSL-1.03.txz: 100% 11 KiB 10.8kB/s 00:01
[sync.ian.buetow.org] Fetching p5-IO-Socket-SSL-2.024.txz: 100% 153 KiB 156.4kB/s 00:01
[sync.ian.buetow.org] Fetching p5-Net-SSLeay-1.72.txz: 100% 234 KiB 239.3kB/s 00:01
[sync.ian.buetow.org] Fetching p5-IO-Socket-IP-0.37.txz: 100% 27 KiB 27.4kB/s 00:01
[sync.ian.buetow.org] Fetching p5-Socket-2.021.txz: 100% 37 KiB 38.0kB/s 00:01
[sync.ian.buetow.org] Fetching p5-Mozilla-CA-20160104.txz: 100% 147 KiB 150.8kB/s 00:01
Checking integrity...
[sync.ian.buetow.org] [1/12] Installing libyaml-0.1.6_2...
[sync.ian.buetow.org] [1/12] Extracting libyaml-0.1.6_2: 100%
[sync.ian.buetow.org] [2/12] Installing libedit-3.1.20150325_2...
[sync.ian.buetow.org] [2/12] Extracting libedit-3.1.20150325_2: 100%
[sync.ian.buetow.org] [3/12] Installing ruby-2.2.4,1...
[sync.ian.buetow.org] [3/12] Extracting ruby-2.2.4,1: 100%
[sync.ian.buetow.org] [4/12] Installing ruby22-gems-2.6.2...
[sync.ian.buetow.org] [4/12] Extracting ruby22-gems-2.6.2: 100%
[sync.ian.buetow.org] [5/12] Installing libxml2-2.9.3...
[sync.ian.buetow.org] [5/12] Extracting libxml2-2.9.3: 100%
[sync.ian.buetow.org] [6/12] Installing dmidecode-3.0...
[sync.ian.buetow.org] [6/12] Extracting dmidecode-3.0: 100%
[sync.ian.buetow.org] [7/12] Installing rubygem-json_pure-1.8.3...
[sync.ian.buetow.org] [7/12] Extracting rubygem-json_pure-1.8.3: 100%
[sync.ian.buetow.org] [8/12] Installing augeas-1.4.0...
[sync.ian.buetow.org] [8/12] Extracting augeas-1.4.0: 100%
[sync.ian.buetow.org] [9/12] Installing rubygem-facter-2.4.4...
[sync.ian.buetow.org] [9/12] Extracting rubygem-facter-2.4.4: 100%
[sync.ian.buetow.org] [10/12] Installing rubygem-hiera1-1.3.4_1...
[sync.ian.buetow.org] [10/12] Extracting rubygem-hiera1-1.3.4_1: 100%
[sync.ian.buetow.org] [11/12] Installing rubygem-ruby-augeas-0.5.0_2...
[sync.ian.buetow.org] [11/12] Extracting rubygem-ruby-augeas-0.5.0_2: 100%
[sync.ian.buetow.org] [12/12] Installing puppet38-3.8.4_1...
===> Creating users and/or groups.
Creating group 'puppet' with gid '814'.
Creating user 'puppet' with uid '814'.
[sync.ian.buetow.org] [12/12] Extracting puppet38-3.8.4_1: 100%
.
.
.
.
.
Looking up update.FreeBSD.org mirrors... 4 mirrors found.
Fetching public key from update4.freebsd.org... done.
Fetching metadata signature for 10.1-RELEASE from update4.freebsd.org... done.
Fetching metadata index... done.
Fetching 2 metadata files... done.
Inspecting system... done.
Preparing to download files... done.
Fetching 874 patches.....10....20....30....
.
.
.
Applying patches... done.
Fetching 1594 files...
Installing updates...
done.
Info: Loading facts
Info: Loading facts
Info: Loading facts
Info: Loading facts
Could not retrieve fact='pkgng_version', resolution='<anonymous>': undefined method `pkgng_enabled' for Facter:Module
Warning: Config file /usr/local/etc/puppet/hiera.yaml not found, using Hiera defaults
Notice: Compiled catalog for sync.ian.buetow.org in environment production in 1.31 seconds
Warning: Found multiple default providers for package: pkgng, gem, pip; using pkgng
Info: Applying configuration version '1460192563'
Notice: /Stage[main]/S_base_freebsd/User[root]/shell: shell changed '/bin/csh' to '/bin/tcsh'
Notice: /Stage[main]/S_user::Root_files/S_user::All_files[root_user]/File[/root/user]/ensure: created
Notice: /Stage[main]/S_user::Root_files/S_user::My_files[root]/File[/root/userfiles]/ensure: created
Notice: /Stage[main]/S_user::Root_files/S_user::My_files[root]/File[/root/.task]/ensure: created
.
.
.
.
Notice: Finished catalog run in 206.09 seconds
Of course, I am operating multiple Jails on the same host this way with Puppet.
All done in a pretty automated manner.
E-Mail your comments to hi@paul.cyou :-)
Back to the main site

Published at 2016-04-03T22:43:42+01:00
 ________________
|# :          : #|
| : ZFS/GELI  : |
| : Offsite   : |
| : Backup    : |
| :___________: |
|   _________   |
|  |  __     |  |
|  | |  |    |  |
\____||__|_____|__|
When it comes to data storage and potential data loss, I am a paranoid person. This is due to my job and a personal experience from over ten years ago: a single drive failure and the loss of all my data (pictures, music, etc.).
A little about my personal infrastructure: I am running my own (mostly FreeBSD-based) root servers across several countries (two in Germany, one in Canada, one in Bulgaria), which store all my online data (e-mail and my Git repositories). I am syncing incremental (and encrypted) ZFS snapshots back and forth between these servers, so the data can always be recovered from another server.
Also, I am operating a local server (an HP MicroServer) at home in my apartment. Full snapshots of all ZFS volumes are pulled from the "online" servers to the local server every other week, and the incremental ZFS snapshots every day. That local server has a ZFS mirror with three disks configured (local triple redundancy). I keep up to half a year's worth of ZFS snapshots of all volumes. The local server also contains all my offline data, such as pictures, private documents, videos, books, and various other backups.
Once weekly, all the local server data is copied to two external USB drives as a backup (without the historic snapshots). For simplicity, these USB drives are not formatted with ZFS but with good old UFS. This gives me a chance to recover from a (potential) ZFS disaster. ZFS is a complex thing. Sometimes it is good not to trust complicated things!
Now I am thinking about an offsite backup of all this local data. The problem is that all the data remains on a single physical location: My local MicroServer. What happens when the house burns or my server, including the internal disks and the attached USB drives, gets stolen? My first thought was to back up everything to the "cloud". However, the significant issue here is the limited amount of available upload bandwidth (only 1MBit/s).
The solution is to add another USB drive (2TB) with an encryption container (GELI) and a ZFS pool on top. The GELI encryption requires a secret key and a secret passphrase. I update the data on that drive once every three months (my calendar reminds me about it), and afterwards I keep the drive at a secret location outside of my apartment. All the information needed to decrypt it (mounting the GELI container) is stored at another (secure) place. Key and passphrase are kept at different sites, though. Even if someone got hold of the drive, they would not be able to decrypt it, as some additional insider knowledge would be required as well.
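A one-time setup along these lines could look as follows on FreeBSD. The device name, key file path, and pool name are assumptions for the sketch, not the real values:

```shell
# Assuming the offsite drive shows up as /dev/da0:
# generate a key file, initialise GELI (it also prompts for a
# passphrase), attach the encrypted provider, and create a ZFS
# pool on top of it.
dd if=/dev/random of=/root/offsite.key bs=64 count=1
geli init -s 4096 -K /root/offsite.key /dev/da0
geli attach -k /root/offsite.key /dev/da0
zpool create offsite /dev/da0.eli
```

On later backup runs only `geli attach` and `zpool import offsite` are needed before updating the data.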
I am thinking of buying a second 2TB USB drive and setting it up the same way as the first one, so that I could alternate the backups. One drive would be at the secret location, and the other at home, and the drives would swap places after each cycle. This would give me some protection against the failure of a drive, and I would have to go to the secret location only once per cycle (swapping the drives) instead of twice (picking the drive up to update the data and bringing it back to the remote location).
E-Mail your comments to hi@paul.cyou :-)
Back to the main site
____ _ _ _
| _ \ ___| |__ _ __ ___ (_) __| |
| | | |/ _ \ '_ \| '__/ _ \| |/ _` |
| |_| | __/ |_) | | | (_) | | (_| |
|____/ \___|_.__/|_| \___/|_|\__,_|

sudo dnf install debootstrap
# 5g
dd if=/dev/zero of=jessie.img bs=$[ 1024 * 1024 ] \
    count=$[ 1024 * 5 ]
# Show used loop devices
sudo losetup -f
# Store the next free one to $loop
loop=loopN
sudo losetup /dev/$loop jessie.img
mkdir jessie
sudo mkfs.ext4 /dev/$loop
sudo mount /dev/$loop jessie
sudo debootstrap --foreign --variant=minbase \
    --arch armel jessie jessie/ \
    http://http.debian.net/debian
sudo umount jessie
adb root && adb wait-for-device && adb shell
mkdir -p /storage/sdcard1/Linux/jessie
exit
# Sparse image problem, may be too big for copying otherwise
gzip jessie.img
# Copy over
adb push jessie.img.gz /storage/sdcard1/Linux/jessie.img.gz
adb shell
cd /storage/sdcard1/Linux
gunzip jessie.img.gz
# Show used loop devices
losetup -f
# Store the next free one to $loop
loop=loopN
# Use the next free one (replace the loop number)
losetup /dev/block/$loop $(pwd)/jessie.img
mount -t ext4 /dev/block/$loop $(pwd)/jessie
# Bind-mount proc, dev, sys
busybox mount --bind /proc $(pwd)/jessie/proc
busybox mount --bind /dev $(pwd)/jessie/dev
busybox mount --bind /dev/pts $(pwd)/jessie/dev/pts
busybox mount --bind /sys $(pwd)/jessie/sys
# Bind-mount the rest of Android
mkdir -p $(pwd)/jessie/storage/sdcard{0,1}
busybox mount --bind /storage/emulated \
$(pwd)/jessie/storage/sdcard0
busybox mount --bind /storage/sdcard1 \
$(pwd)/jessie/storage/sdcard1
# Check mounts
mount | grep jessie
chroot $(pwd)/jessie /bin/bash -l
export PATH=/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin:/usr/local/sbin
/debootstrap/debootstrap --second-stage
exit # Leave chroot
exit # Leave adb shell
# Install script jessie.sh
adb push storage/sdcard1/Linux/jessie.sh /storage/sdcard1/Linux/jessie.sh
adb shell
cd /storage/sdcard1/Linux
sh jessie.sh enter
# Bashrc
cat <<END >~/.bashrc
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:$PATH
export EDITOR=vim
hostname $(cat /etc/hostname)
END
# Fixing an error message while loading the profile
sed -i s#id#/usr/bin/id# /etc/profile
# Setting the hostname
echo phobos > /etc/hostname
echo 127.0.0.1 phobos > /etc/hosts
hostname phobos
# Apt-sources
cat <<END > sources.list
deb http://ftp.uk.debian.org/debian/ jessie main contrib non-free
deb-src http://ftp.uk.debian.org/debian/ jessie main contrib non-free
END
apt-get update
apt-get upgrade
apt-get dist-upgrade
exit # Exit chroot
sh jessie.sh enter
# Setup example service uptimed
apt-get install uptimed
cat <<END > /etc/rc.debroid
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:$PATH
service uptimed status &>/dev/null || service uptimed start
exit 0
END
chmod 0755 /etc/rc.debroid
exit # Exit chroot
exit # Exit adb shell
adb push data/local/userinit.sh /data/local/userinit.sh
adb shell
chmod +x /data/local/userinit.sh
exit
#include <stdio.h>
#define $arg function_argument
#define my int
#define sub int
#define BEGIN int main(void)
my $arg;
sub hello() {
printf("Hello, welcome to the Fibonacci Numbers!\n");
printf("This program is all, valid C and C++ and Perl and Raku code!\n");
printf("It calculates all fibonacci numbers from 0 to 9!\n\n");
return 0;
}
sub fibonacci() {
my $n = $arg;
if ($n < 2) {
return $n;
}
$arg = $n - 1;
my $fib1 = fibonacci();
$arg = $n - 2;
my $fib2 = fibonacci();
return $fib1 + $fib2;
}
BEGIN {
hello();
my $i = 0;
while ($i <= 10) {
$arg = $i;
printf("fib(%d) = %d\n", $i, fibonacci());
$i++;
}
}
% gcc fibonacci.pl.raku.c -o fibonacci
% ./fibonacci
Hello, welcome to the Fibonacci Numbers!
This program is all, valid C and C++ and Perl and Raku code!
It calculates all fibonacci numbers from 0 to 9!

fib(0) = 0
fib(1) = 1
fib(2) = 1
fib(3) = 2
fib(4) = 3
fib(5) = 5
fib(6) = 8
fib(7) = 13
fib(8) = 21
fib(9) = 34
fib(10) = 55
% g++ fibonacci.pl.raku.c -o fibonacci
% ./fibonacci
Hello, welcome to the Fibonacci Numbers!
This program is all, valid C and C++ and Perl and Raku code!
It calculates all fibonacci numbers from 0 to 9!

fib(0) = 0
fib(1) = 1
fib(2) = 1
fib(3) = 2
fib(4) = 3
fib(5) = 5
fib(6) = 8
fib(7) = 13
fib(8) = 21
fib(9) = 34
fib(10) = 55
% perl fibonacci.pl.raku.c
Hello, welcome to the Fibonacci Numbers!
This program is all, valid C and C++ and Perl and Raku code!
It calculates all fibonacci numbers from 0 to 9!

fib(0) = 0
fib(1) = 1
fib(2) = 1
fib(3) = 2
fib(4) = 3
fib(5) = 5
fib(6) = 8
fib(7) = 13
fib(8) = 21
fib(9) = 34
fib(10) = 55
% raku fibonacci.pl.raku.c
Hello, welcome to the Fibonacci Numbers!
This program is all, valid C and C++ and Perl and Raku code!
It calculates all fibonacci numbers from 0 to 9!

fib(0) = 0
fib(1) = 1
fib(2) = 1
fib(3) = 2
fib(4) = 3
fib(5) = 5
fib(6) = 8
fib(7) = 13
fib(8) = 21
fib(9) = 34
fib(10) = 55
a'! _,,_ a'! _,,_ a'! _,,_
\\_/ \ \\_/ \ \\_/ \.-,
\, /-( /'-,\, /-( /'-, \, /-( /
//\ //\\ //\ //\\ //\ //\\jrei
# Starting
./bin/perldaemon start
(or shortcut ./control start)

# Stopping
./bin/perldaemon stop
(or shortcut ./control stop)

# Alternatively: Starting in foreground
./bin/perldaemon start daemon.daemonize=no
(or shortcut ./control foreground)
pb@titania:~/svn/utils/perldaemon/trunk$ ./control keys
# Path to the logfile
daemon.logfile=./log/perldaemon.log
# The amount of seconds until the next event loop takes place
daemon.loopinterval=1
# Path to the modules dir
daemon.modules.dir=./lib/PerlDaemonModules
# Specifies whether the daemon should run in daemon or foreground mode
daemon.daemonize=yes
# Path to the pidfile
daemon.pidfile=./run/perldaemon.pid
# Each module should run every run interval seconds
daemon.modules.runinterval=3
# Path to the alive file (touched every loop interval seconds, usable for monitoring)
daemon.alivefile=./run/perldaemon.alive
# Specifies the working directory
daemon.wd=./
$ ./control keys | grep daemon.loopinterval
daemon.loopinterval=1
$ ./control keys daemon.loopinterval=10 | grep daemon.loopinterval
daemon.loopinterval=10
$ ./control start daemon.loopinterval=10; sleep 10; tail -n 2 log/perldaemon.log
Starting daemon now...
Mon Jun 13 11:29:27 2011 (PID 2838): Triggering PerlDaemonModules::ExampleModule (last triggered before 10.002106s; carry: 7.002106s; wanted interval: 3s)
Mon Jun 13 11:29:27 2011 (PID 2838): ExampleModule Test 2
$ ./control stop
Stopping daemon now...
$ ./control keys daemon.loopinterval=10 > new.conf; mv new.conf conf/perldaemon.conf
package PerlDaemonModules::ExampleModule;
use strict;
use warnings;
sub new ($$$) {
my ($class, $conf) = @_;
my $self = bless { conf => $conf }, $class;
# Store some private module stuff
$self->{counter} = 0;
return $self;
}
# Runs periodically in a loop (set interval in perldaemon.conf)
sub do ($) {
my $self = shift;
my $conf = $self->{conf};
my $logger = $conf->{logger};
# Calculate some private module stuff
my $count = ++$self->{counter};
$logger->logmsg("ExampleModule Test $count");
}
1;
cd ./lib/PerlDaemonModules/
cp ExampleModule.pm YourModule.pm
vi YourModule.pm
cd -
./bin/perldaemon restart
(or shortcut ./control restart)
____ _ __
/ / _|_ _ _ __ ___ _ _ ___ __ _| |__ / _|_ _
/ / |_| | | | '_ \ / _ \ | | | |/ _ \/ _` | '_ \ | |_| | | |
_ / /| _| |_| | |_) | __/ | |_| | __/ (_| | | | |_| _| |_| |
(_)_/ |_| \__, | .__/ \___| \__, |\___|\__,_|_| |_(_)_| \__, |
|___/|_| |___/ |___/
typedef struct {
Tupel *p_tupel_argv; // Contains command line options
List *p_list_token; // Initial list of tokens
Hash *p_hash_syms; // Symbol table
char *c_basename;
} Fype;
Fype*
fype_new() {
Fype *p_fype = malloc(sizeof(Fype));
p_fype->p_hash_syms = hash_new(512);
p_fype->p_list_token = list_new();
p_fype->p_tupel_argv = tupel_new();
p_fype->c_basename = NULL;
garbage_init();
return (p_fype);
}
void
fype_delete(Fype *p_fype) {
argv_tupel_delete(p_fype->p_tupel_argv);
hash_iterate(p_fype->p_hash_syms, symbol_cleanup_hash_syms_cb);
hash_delete(p_fype->p_hash_syms);
list_iterate(p_fype->p_list_token, token_ref_down_cb);
list_delete(p_fype->p_list_token);
if (p_fype->c_basename)
free(p_fype->c_basename);
garbage_destroy();
}
int
fype_run(int i_argc, char **pc_argv) {
Fype *p_fype = fype_new();
// argv: Maintains command line options
argv_run(p_fype, i_argc, pc_argv);
// scanner: Creates a list of tokens
scanner_run(p_fype);
// interpret: Interprets the list of tokens
interpret_run(p_fype);
fype_delete(p_fype);
return (0);
}
my foo = 1 + 2;
say foo;
my bar = 12, baz = foo;
say 1 + bar;
say bar;
my baz;
say baz; # Will print out 0
ifnot defined foo {
say "No foo yet defined";
}
my foo = 1;
if defined foo {
put "foo is defined and has the value ";
say foo;
}
my foo = "foo";
my bar = \foo;
foo = "bar";
# The synonym variable should now also be set to "bar"
assert "bar" == bar;
# Create a new procedure baz
proc baz { say "I am baz"; }
# Make a synonym bay, and undefine baz
my bay = \baz;
undef baz;
# bay still has a reference of the original procedure baz
bay; # This prints out "I am baz"
my foo = 1;
say syms foo; # Prints 1
my baz = \foo;
say syms foo; # Prints 2
say syms baz; # Prints 2
undef baz;
say syms foo; # Prints 1
my bar = 3, foo = 1 + 2;
say foo;
exit foo - bar;
(any) <any> + <any>
(any) <any> - <any>
(any) <any> * <any>
(any) <any> / <any>
(integer) <any> == <any>
(integer) <any> != <any>
(integer) <any> <= <any>
(integer) <any> gt <any>
(integer) <any> <> <any>
(integer) <any> gt <any>
(integer) not <any>
(integer) <any> :< <any>
(integer) <any> :> <any>
(integer) <any> and <any>
(integer) <any> or <any>
(integer) <any> xor <any>
(number) neg <number>
(integer) no <integer>
(integer) yes <integer>
# Prints out 1, because foo is not defined
if yes { say no defined foo; }
if <expression> { <statements> }
ifnot <expression> { <statements> }
while <expression> { <statements> }
until <expression> { <statements> }
my foo = 1;
{
# Prints out 1
put defined foo;
{
my bar = 2;
# Prints out 1
put defined bar;
# Prints out all available symbols at this
# point to stdout. Those are: bar and foo
scope;
}
# Prints out 0
put defined bar;
my baz = 3;
}
# Prints out 0
say defined bar;
./fype -e 'my global; func foo { my var4; func bar { my var2, var3; func baz { my var1; scope; } baz; } bar; } foo;'
Scopes:
Scope stack size: 3
Global symbols:
SYM_VARIABLE: global (id=00034, line=-0001, pos=-001, type=TT_INTEGER, dval=0.000000, refs=-1)
SYM_FUNCTION: foo
Local symbols:
SYM_VARIABLE: var1 (id=00038, line=-0001, pos=-001, type=TT_INTEGER, dval=0.000000, refs=-1)
1 level(s) up:
SYM_VARIABLE: var2 (id=00036, line=-0001, pos=-001, type=TT_INTEGER, dval=0.000000, refs=-1)
SYM_VARIABLE: var3 (id=00037, line=-0001, pos=-001, type=TT_INTEGER, dval=0.000000, refs=-1)
SYM_FUNCTION: baz
2 level(s) up:
SYM_VARIABLE: var4 (id=00035, line=-0001, pos=-001, type=TT_INTEGER, dval=0.000000, refs=-1)
SYM_FUNCTION: bar
(integer) defined <identifier>
(integer) undef <identifier>
(void) end
(void) exit <integer>
(integer) fork
my pid = fork;
if pid {
put "I am the parent process; child has the pid ";
say pid;
} ifnot pid {
say "I am the child process";
}
(integer) GC
(any) put <any>
(any) say <any>
(void) ln
proc foo {
say 1 + a * 3 + b;
my c = 6;
}
my a = 2, b = 4;
foo; # Run the procedure. Print out "11\n"
say c; # Print out "6\n";
proc foo {
say "I am foo";
undef bar;
proc bar {
say "I am bar";
}
}
# Here bar would produce an error because
# the proc is not yet defined!
# bar;
foo; # Here the procedure foo will define the procedure bar!
bar; # Now the procedure bar is defined!
foo; # Here the procedure foo will redefine bar again!
func foo {
say 1 + a * 3 + b;
my c = 6;
}
my a = 2, b = 4;
foo; # Run the procedure. Print out "11\n"
say c; # Will produce an error because c is out of scope!
func foo {
func bar {
say "Hello i am nested";
}
bar; # Calling nested
}
foo;
bar; # Will produce an error because bar is out of scope!
func bar { say "bar" }
my foo = [bar, 1, 4/2, double "3", ["A", ["BA", "BB"]]];
say foo;
% ./fype arrays.fy
bar 01 2 3.000000 A BA BB
Published at 2010-05-07T08:17:59+01:00
_____|~~\_____ _____________
_-~ \ | \
_- | ) \ |__/ \ \
_- ) | | | \ \
_- | ) / |--| | |
__-_______________ /__/_______| |_________
( |---- | |
`---------------'--\\\\ .`--' -Glyde-
`||||
In contrast to Haskell, Standard ML does not use lazy evaluation by default, but eager evaluation.
=> https://en.wikipedia.org/wiki/Eager_evaluation Eager evaluation (Wikipedia)
Some problems are easier to solve with lazy evaluation than with eager evaluation. For example, you might want to represent the digits of Pi or some other infinite list. With lazy evaluation, each element of the list is calculated when it is first accessed, but not earlier.
However, it is possible to emulate lazy evaluation in most eagerly evaluated languages. This is how it can be done in Standard ML (playing with an infinite list of natural-number tuples and filtering out tuples containing 0):
type 'a lazy = unit -> 'a;
fun force (f:'a lazy) = f ();
fun delay x = (fn () => x) : 'a lazy;
datatype 'a sequ = NIL | CONS of 'a * 'a sequ lazy;
fun first 0 s = []
| first n NIL = []
| first n (CONS (i,r)) = i :: first (n-1) (force r);
fun filters p NIL = NIL
| filters p (CONS (x,r)) =
if p x
then CONS (x, fn () => filters p (force r))
else
filters p (force r);
fun nat_pairs () =
let
fun from_pair (x,0) =
CONS ((x,0), fn () => from_pair (0,x+1))
| from_pair (up,dn) =
CONS ((up,dn), fn () => from_pair (up+1,dn-1))
in from_pair (0,0)
end;
(* Test
val test = first 10 (nat_pairs ())
*)
fun nat_pairs_not_null () =
filters (fn (x,y) => x > 0 andalso y > 0) (nat_pairs ());
(* Test
val test = first 10 (nat_pairs_not_null ());
*)
As Haskell already uses lazy evaluation by default, there is no need to construct a new data type. Lists in Haskell are lazy by default. You will notice that the code is also much shorter and easier to understand than the SML version.
{- Just to make it look like the ML example -}
first = take
filters = filter
{- Implementation -}
nat_pairs = from_pair 0 0
where
from_pair x 0 = [x,0] : from_pair 0 (x+1)
from_pair up dn = [up,dn] : from_pair (up+1) (dn-1)
{- Test:
first 10 nat_pairs
-}
nat_pairs_not_null = filters (\[x,y] -> x > 0 && y > 0) nat_pairs
{- Test:
first 10 nat_pairs_not_null
-}
E-Mail your comments to hi@paul.cyou :-)
Back to the main site
datatype 'a multi = EMPTY | ELEM of 'a | UNION of 'a multi * 'a multi
data (Eq a) => Multi a
= Empty
| Elem a
| Union (Multi a) (Multi a)
deriving Show
fun number (EMPTY) _ = 0
  | number (ELEM x) w = if x = w then 1 else 0
  | number (UNION (x,y)) w = (number x w) + (number y w)

fun test_number w = number (UNION (EMPTY,
      UNION (ELEM 4, UNION (ELEM 6,
      UNION (UNION (ELEM 4, ELEM 4), EMPTY))))) w
number Empty _ = 0
number (Elem x) w = if x == w then 1 else 0
number (Union x y) w = number x w + number y w

test_number w = number (Union Empty
      (Union (Elem 4) (Union (Elem 6)
      (Union (Union (Elem 4) (Elem 4)) Empty)))) w
fun simplify (UNION (x,y)) =
      let fun is_empty (EMPTY) = true | is_empty _ = false
          val x' = simplify x
          val y' = simplify y
      in if (is_empty x') andalso (is_empty y')
         then EMPTY
         else if (is_empty x')
         then y'
         else if (is_empty y')
         then x'
         else UNION (x', y')
      end
  | simplify x = x
simplify (Union x y)
  | (isEmpty x') && (isEmpty y') = Empty
  | isEmpty x' = y'
  | isEmpty y' = x'
  | otherwise = Union x' y'
  where
    isEmpty Empty = True
    isEmpty _ = False
    x' = simplify x
    y' = simplify y
simplify x = x
fun delete_all m w =
  let fun delete_all' (ELEM x) = if x = w then EMPTY else ELEM x
        | delete_all' (UNION (x,y)) = UNION (delete_all' x, delete_all' y)
        | delete_all' x = x
  in simplify (delete_all' m)
  end
delete_all m w = simplify (delete_all' m)
  where
    delete_all' (Elem x) = if x == w then Empty else Elem x
    delete_all' (Union x y) = Union (delete_all' x) (delete_all' y)
    delete_all' x = x
fun delete_one m w =
  let fun delete_one' (UNION (x,y)) =
            let val (x', deleted) = delete_one' x
            in if deleted
               then (UNION (x', y), deleted)
               else let val (y', deleted) = delete_one' y
                    in (UNION (x, y'), deleted)
                    end
            end
        | delete_one' (ELEM x) =
            if x = w then (EMPTY, true) else (ELEM x, false)
        | delete_one' x = (x, false)
      val (m', _) = delete_one' m
  in simplify m'
  end
delete_one m w = simplify m'
  where
    (m', _) = delete_one' m
    delete_one' (Union x y) =
      let (x', deleted) = delete_one' x
      in if deleted
         then (Union x' y, deleted)
         else let (y', deleted) = delete_one' y
              in (Union x y', deleted)
    delete_one' (Elem x) =
      if x == w then (Empty, True) else (Elem x, False)
    delete_one' x = (x, False)
(* SML *)
fun make_map_fn f1 = fn (x,y) => f1 x :: y
fun make_filter_fn f1 = fn (x,y) => if f1 x then x :: y else y
fun my_map f l = foldr (make_map_fn f) [] l
fun my_filter f l = foldr (make_filter_fn f) [] l

{- Haskell -}
make_map_fn f1 = \x y -> f1 x : y
make_filter_fn f1 = \x y -> if f1 x then x : y else y
my_map f l = foldr (make_map_fn f) [] l
my_filter f l = foldr (make_filter_fn f) [] l
Published at 2008-12-29T09:10:41+00:00; Updated at 2021-12-01
_
|E]
.-|=====-.
| | mail |
___|________|
||
||
|| www
,;, || )_(,;;;,
<_> \ || \|/ \_/
\|/ \\|| \\| |//
_jgs_\|//_\\|///_\V/_\|//__
Art by Joan Stark
Last week I was in Vidin, Bulgaria, with no internet access, and I had to fix my MTA (Postfix) at host.0.buetow.org, which serves e-mail for all my customers at P. B. Labs. It is good that I do not guarantee high availability on my web services (I also have a full-time job elsewhere).
My first attempt to find an internet café that was open during Christmastime failed. However, with my N95 phone I found lots of free WLAN hotspots. The hotspots refused to let me log into my server via SSH, as I have configured a non-standard SSH port for security reasons. Without knowing the costs, I used the GPRS internet access of my German phone provider (yes, I had to pay roaming fees).

With Putty for the N95 and configuring Postfix in Vim with the T9 input mechanism, I managed to fix the problem. But it took half an hour:
It was a pain. My next mobile phone MUST have a full QWERTY keyboard. That would have made my life a lot easier. :)
At the moment I am in Sofia, Bulgaria. Here I can at least use an unprotected WLAN hotspot belonging to one of the neighbours, whom I don't know in person, and it does not block any ports at all :)
E-Mail your comments to hi@paul.cyou :-)
Back to the main site
'\|/' *
-- * -----
/|\ ____
' | ' {_ o^> *
: -_ /)
: ( ( .-''`'.
. \ \ / \
. \ \ / \
\ `-' `'.
\ . ' / `.
\ ( \ ) ( .')
,, t '. | / | (
'|``_/^\___ '| |`'-..-'| ( ()
_~~|~/_|_|__/|~~~~~~~ | / ~~~~~ | | ~~~~~~~~
-_ |L[|]L|/ | |\ MJP ) )
( |( / /|
~~ ~ ~ ~~~~ | /\\ / /| |
|| \\ _/ / | |
~ ~ ~~~ _|| (_/ (___)_| |Nov291999
(__) (____)
#!/usr/bin/perl
# (C) 2006 by Paul C. Buetow
goto library for study $math;
BEGIN { s/earching/ books/
and read $them, $at, $the } library:
our $topics, cos and tan,
require strict; import { of, tied $patience };
do { int'egrate'; sub trade; };
do { exp'onentize' and abs'olutize' };
study and study and study and study;
foreach $topic ({of, math}) {
you, m/ay /go, to, limits }
do { not qw/erk / unless $success
and m/ove /o;$n and study };
do { int'egrate'; sub trade; };
do { exp'onentize' and abs'olutize' };
study and study and study and study;
grep /all/, exp'onents' and cos'inuses';
/seek results/ for @all, log'4rithms';
'you' =~ m/ay /go, not home
unless each %book ne#ars
$completion;
do { int'egrate'; sub trade; };
do { exp'onentize' and abs'olutize' };
#at
home: //ig,'nore', time and sleep $very =~ s/tr/on/g;
__END__
#!/usr/bin/perl
# (C) 2006 by Paul C. Buetow
Christmas:{time;#!!!
Children: do tell $wishes;
Santa: for $each (@children) {
BEGIN { read $each, $their, wishes and study them; use Memoize#ing
} use constant gift, 'wrapping';
package Gifts; pack $each, gift and bless $each and goto deliver
or do import if not local $available,!!! HO, HO, HO;
redo Santa, pipe $gifts, to_childs;
redo Santa and do return if last one, is, delivered;
deliver: gift and require diagnostics if our $gifts ,not break;
do{ use NEXT; time; tied $gifts} if broken and dump the, broken, ones;
The_children: sleep and wait for (each %gift) and try { to => untie $gifts };
redo Santa, pipe $gifts, to_childs;
redo Santa and do return if last one, is, delivered;
The_christmas_tree: formline s/ /childrens/, $gifts;
alarm and warn if not exists $Christmas{ tree}, @t, $ENV{HOME};
write <<EMail
to the parents to buy a new christmas tree!!!!111
and send the
EMail
;wait and redo deliver until defined local $tree;
redo Santa, pipe $gifts, to_childs;
redo Santa and do return if last one, is, delivered ;}
END {} our $mission and do sleep until next Christmas ;}
__END__
This is perl, v5.8.8 built for i386-freebsd-64int
#!/usr/bin/perl
# (C) 2007 by Paul C. Buetow
BEGIN{} goto mall for $shopping;
m/y/; mall: seek$s, cool products(), { to => $sell };
for $their (@business) { to:; earn:; a:; lot:; of:; money: }
do not goto home and exit mall if exists $new{product};
foreach $of (q(uality rich products)){} package products;
our $news; do tell cool products() and do{ sub#tract
cool{ $products and shift @the, @bad, @ones;
do bless [q(uality)], $products
and return not undef $stuff if not (local $available) }};
do { study and study and study for cool products() }
and do { seek $all, cool products(), { to => $buy } };
do { write $them, $down } and do { order: foreach (@case) { package s } };
goto home if not exists $more{money} or die q(uerying) ;for( @money){};
at:;home: do { END{} and:; rest:; a:; bit: exit $shopping }
and sleep until unpack$ing, cool products();
__END__
This is perl, v5.8.8 built for i386-freebsd-64int