-rw-r--r--about/resources.gmi204
-rw-r--r--about/showcase.gmi970
-rw-r--r--about/showcase.gmi.tpl971
-rw-r--r--about/showcase/debroid/image-1.png36
-rw-r--r--gemfeed/2025-06-22-task-samurai.gmi28
-rw-r--r--gemfeed/2025-06-22-task-samurai.gmi.tpl26
-rw-r--r--gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-6.gmi443
-rw-r--r--gemfeed/atom.xml30
-rw-r--r--index.gmi2
-rw-r--r--uptime-stats.gmi44
10 files changed, 1279 insertions, 1475 deletions
diff --git a/about/resources.gmi b/about/resources.gmi
index cccf20eb..108eb000 100644
--- a/about/resources.gmi
+++ b/about/resources.gmi
@@ -35,105 +35,105 @@ You won't find any links on this site because, over time, the links will break.
In random order:
-* Leanring eBPF; Liz Rice; O'Reilly
-* The Go Programming Language; Alan A. A. Donovan; Addison-Wesley Professional
-* Ultimate Go Notebook; Bill Kennedy
-* Clusterbau mit Linux-HA; Michael Schwartzkopff; O'Reilly
-* The Docker Book; James Turnbull; Kindle
-* Systemprogrammierung in Go; Frank Müller; dpunkt
-* Systems Performance Tuning; Gian-Paolo D. Musumeci and others...; O'Reilly
-* Raku Recipes; J.J. Merelo; Apress
-* Tmux 2: Productive Mouse-free Development; Brain P. Hogan; The Pragmatic Programmers
* Modern Perl; Chromatic; Onyx Neon Press
-* Think Raku (aka Think Perl 6); Laurent Rosenfeld, Allen B. Downey; O'Reilly
-* Amazon Web Services in Action; Michael Wittig and Andreas Wittig; Manning Publications
-* Site Reliability Engineering; How Google runs production systems; O'Reilly
-* DNS and BIND; Cricket Liu; O'Reilly
-* Funktionale Programmierung; Peter Pepper; Springer
+* Go Brain Teasers - Exercise Your Mind; Miki Tebeka; The Pragmatic Programmers
+* Object-Oriented Programming with ANSI-C; Axel-Tobias Schreiner
* Effective awk programming; Arnold Robbins; O'Reilly
-* Higher Order Perl; Mark Dominus; Morgan Kaufmann
+* 97 things every SRE should know; Emil Stolarsky, Jaime Woo; O'Reilly
+* DNS and BIND; Cricket Liu; O'Reilly
* Programming Perl aka "The Camel Book"; Tom Christiansen, brian d foy, Larry Wall & Jon Orwant; O'Reilly
-* Hands-on Infrastructure Monitoring with Prometheus; Joel Bastos, Pedro Araujo; Packt
-* Polished Ruby Programming; Jeremy Evans; Packt Publishing
-* 100 Go Mistakes and How to Avoid Them; Teiva Harsanyi; Manning Publications
-* Object-Oriented Programming with ANSI-C; Axel-Tobias Schreiner
+* 21st Century C: C Tips from the New School; Ben Klemens; O'Reilly
+* Data Science at the Command Line; Jeroen Janssens; O'Reilly
+* Concurrency in Go; Katherine Cox-Buday; O'Reilly
+* Perl New Features; Joshua McAdams, brian d foy; Perl School
+* The KCNA (Kubernetes and Cloud Native Associate) Book; Nigel Poulton
* Programming Ruby 3.3 (5th Edition); Noel Rappin, with Dave Thomas; The Pragmatic Bookshelf
-* Effective Java; Joshua Bloch; Addison-Wesley Professional
-* Java ist auch eine Insel; Christian Ullenboom;
-* Terraform Cookbook; Mikael Krief; Packt Publishing
-* C++ Programming Language; Bjarne Stroustrup;
-* 97 things every SRE should know; Emil Stolarsky, Jaime Woo; O'Reilly
* DevOps And Site Reliability Engineering Handbook; Stephen Fleming; Audible
+* Pro Puppet; James Turnbull, Jeffrey McCune; Apress
* Raku Fundamentals; Moritz Lenz; Apress
-* Concurrency in Go; Katherine Cox-Buday; O'Reilly
-* Perl New Features; Joshua McAdams, brian d foy; Perl School
+* Ultimate Go Notebook; Bill Kennedy
+* Hands-on Infrastructure Monitoring with Prometheus; Joel Bastos, Pedro Araujo; Packt
+* Learning eBPF; Liz Rice; O'Reilly
+* Higher Order Perl; Mark Dominus; Morgan Kaufmann
+* Systems Performance Tuning; Gian-Paolo D. Musumeci and others...; O'Reilly
+* Polished Ruby Programming; Jeremy Evans; Packt Publishing
+* Learn You a Haskell for Great Good!; Miran Lipovaca; No Starch Press
+* Systemprogrammierung in Go; Frank Müller; dpunkt
+* The DevOps Handbook; Gene Kim, Jez Humble, Patrick Debois, John Willis; Audible
+* The Docker Book; James Turnbull; Kindle
* The Kubernetes Book; Nigel Poulton; Unabridged Audiobook
-* Data Science at the Command Line; Jeroen Janssens; O'Reilly
+* The Pragmatic Programmer; David Thomas; Addison-Wesley
+* Funktionale Programmierung; Peter Pepper; Springer
* Kubernetes Cookbook; Sameer Naik, Sébastien Goasguen, Jonathan Michaux; O'Reilly
-* 21st Century C: C Tips from the New School; Ben Klemens; O'Reilly
+* Clusterbau mit Linux-HA; Michael Schwartzkopff; O'Reilly
+* Terraform Cookbook; Mikael Krief; Packt Publishing
+* Site Reliability Engineering; How Google runs production systems; O'Reilly
+* The C++ Programming Language; Bjarne Stroustrup;
+* The Practice of System and Network Administration; Thomas A. Limoncelli, Christina J. Hogan, Strata R. Chalup; Addison-Wesley Professional
+* Pro Git; Scott Chacon, Ben Straub; Apress
+* 100 Go Mistakes and How to Avoid Them; Teiva Harsanyi; Manning Publications
+* Think Raku (aka Think Perl 6); Laurent Rosenfeld, Allen B. Downey; O'Reilly
+* Raku Recipes; J.J. Merelo; Apress
* Learn You Some Erlang for Great Good!; Fred Hébert; No Starch Press
-* Pro Puppet; James Turnbull, Jeffrey McCune; Apress
-* Learn You a Haskell for Great Good!; Miran Lipovaca; No Starch Press
-* Go Brain Teasers - Exercise Your Mind; Miki Tebeka; The Pragmatic Programmers
* Distributed Systems: Principles and Paradigms; Andrew S. Tanenbaum; Pearson
+* Effective Java; Joshua Bloch; Addison-Wesley Professional
+* Java ist auch eine Insel; Christian Ullenboom;
+* Tmux 2: Productive Mouse-free Development; Brian P. Hogan; The Pragmatic Programmers
* Developing Games in Java; David Brackeen and others...; New Riders
-* The Practise of System and Network Administration; Thomas A. Limoncelli, Christina J. Hogan, Strata R. Chalup; Addison-Wesley Professional Pro Git; Scott Chacon, Ben Straub; Apress
-* The Pragmatic Programmer; David Thomas; Addison-Wesley
-* The KCNA (Kubernetes and Cloud Native Associate) Book; Nigel Poulton
-* The DevOps Handbook; Gene Kim, Jez Humble, Patrick Debois, John Willis; Audible
+* The Go Programming Language; Alan A. A. Donovan; Addison-Wesley Professional
+* Amazon Web Services in Action; Michael Wittig and Andreas Wittig; Manning Publications
## Technical references
I didn't read these from beginning to end, but I use them to look things up. The books are in random order:
-* Groovy Kurz & Gut; Joerg Staudemeier; O'Reilly
+* BPF Performance Tools - Linux System and Application Observability; Brendan Gregg; Addison Wesley
+* Implementing Service Level Objectives; Alex Hidalgo; O'Reilly
* The Linux Programming Interface; Michael Kerrisk; No Starch Press
-* Algorithms; Robert Sedgewick, Kevin Wayne; Addison Wesley
-* Relayd and Httpd Mastery; Michael W Lucas
* Go: Design Patterns for Real-World Projects; Mat Ryer; Packt
+* Groovy Kurz & Gut; Joerg Staudemeier; O'Reilly
+* Relayd and Httpd Mastery; Michael W Lucas
+* Algorithms; Robert Sedgewick, Kevin Wayne; Addison Wesley
* Understanding the Linux Kernel; Daniel P. Bovet, Marco Cesati; O'Reilly
-* Implementing Service Level Objectives; Alex Hidalgo; O'Reilly
-* BPF Performance Tools - Linux System and Application Observability, Brendan Gregg; Addison Wesley
## Self-development and soft-skills books
In random order:
-* The Good Enough Job; Simone Stolzoff; Ebury Edge
-* The Phoenix Project - A Novel About IT, DevOps, and Helping your Business Win; Gene Kim and Kevin Behr; Trade Select
-* 101 Essays that change the way you think; Brianna Wiest; Audiobook
-* Consciousness: A Very Short Introduction; Susan Blackmore; Oxford Uiversity Press
-* Staff Engineer: Leadership beyond the management track; Will Larson; Audiobook
-* The Power of Now; Eckhard Tolle; Yellow Kite
-* Never Split the Difference; Chris Voss, Tahl Raz; Random House Business
-* Stop starting, start finishing; Arne Roock; Lean-Kanban University
-* Getting Things Done; David Allen
-* The Bullet Journal Method; Ryder Carroll; Fourth Estate
-* Slow Productivity; Cal Newport; Penguin Random House
-* The Daily Stoic; Ryan Holiday, Stephen Hanselman; Profile Books
-* Search Inside Yourself - The Unexpected path to Achieving Success, Happiness (and World Peace); Chade-Meng Tan, Daniel Goleman, Jon Kabat-Zinn; HarperOne
* So Good They Can't Ignore You; Cal Newport; Business Plus
-* Deep Work; Cal Newport; Piatkus
-* Ultralearning; Anna Laurent; Self-published via Amazon
-* Ultralearning; Scott Young; Thorsons
-* Soft Skills; John Sommez; Manning Publications
+* Stop starting, start finishing; Arne Roock; Lean-Kanban University
+* Psycho-Cybernetics; Maxwell Maltz; Perigee Books
+* Influence without Authority; A. Cohen, D. Bradford; Wiley
* The Complete Software Developer's Career Guide; John Sonmez; Unabridged Audiobook
-* The Obstacle Is The Way; Ryan Holiday; Profile Books Ltd
-* Buddah and Einstein walk into a Bar; Guy Joseph Ale, Claire Bloom; Blackstone Publishing
-* Time Management for System Administrators; Thomas A. Limoncelli; O'Reilly
-* Solve for Happy; Mo Gawdat (RE-READ 1ST TIME)
* Who Moved My Cheese?; Dr. Spencer Johnson; Vermilion
-* Eat That Frog!; Brian Tracy; Hodder Paperbacks
-* The Off Switch; Mark Cropley; Virgin Books (RE-READ 1ST TIME)
-* The 7 Habits Of Highly Effective People; Stephen R. Covey; Simon & Schuster UK
-* Influence without Authority; A. Cohen, D. Bradford; Wiley
-* The Joy of Missing Out; Christina Crook; New Society Publishers
-* Eat That Frog; Brian Tracy
+* Ultralearning; Anna Laurent; Self-published via Amazon
+* Staff Engineer: Leadership beyond the management track; Will Larson; Audiobook
+* Deep Work; Cal Newport; Piatkus
+* Meditation for Mortals; Oliver Burkeman; Audiobook
+* The Bullet Journal Method; Ryder Carroll; Fourth Estate
* Coders at Work - Reflections on the craft of programming; Peter Seibel and Mitchell Dorian et al.; Audiobook
* Atomic Habits; James Clear; Random House Business
-* Meditation for Mortals, Oliver Burkeman, Audiobook
+* The Obstacle Is The Way; Ryan Holiday; Profile Books Ltd
+* Time Management for System Administrators; Thomas A. Limoncelli; O'Reilly
+* Soft Skills; John Sonmez; Manning Publications
+* Consciousness: A Very Short Introduction; Susan Blackmore; Oxford University Press
+* Getting Things Done; David Allen
+* Buddha and Einstein Walk Into a Bar; Guy Joseph Ale, Claire Bloom; Blackstone Publishing
+* Eat That Frog; Brian Tracy
+* The 7 Habits Of Highly Effective People; Stephen R. Covey; Simon & Schuster UK
+* The Good Enough Job; Simone Stolzoff; Ebury Edge
* Digital Minimalism; Cal Newport; Portfolio Penguin
-* Psycho-Cybernetics; Maxwell Maltz; Perigee Books
+* Eat That Frog!; Brian Tracy; Hodder Paperbacks
+* The Daily Stoic; Ryan Holiday, Stephen Hanselman; Profile Books
+* The Joy of Missing Out; Christina Crook; New Society Publishers
+* The Power of Now; Eckhart Tolle; Yellow Kite
+* The Off Switch; Mark Cropley; Virgin Books (RE-READ 1ST TIME)
+* Slow Productivity; Cal Newport; Penguin Random House
+* The Phoenix Project - A Novel About IT, DevOps, and Helping your Business Win; Gene Kim and Kevin Behr; Trade Select
+* Solve for Happy; Mo Gawdat (RE-READ 1ST TIME)
+* Search Inside Yourself - The Unexpected Path to Achieving Success, Happiness (and World Peace); Chade-Meng Tan, Daniel Goleman, Jon Kabat-Zinn; HarperOne
+* Ultralearning; Scott Young; Thorsons
+* Never Split the Difference; Chris Voss, Tahl Raz; Random House Business
+* 101 Essays That Will Change the Way You Think; Brianna Wiest; Audiobook
=> ../notes/index.gmi Here are notes of mine for some of the books
@@ -141,29 +141,29 @@ In random order:
Some of these were in-person with exams; others were online learning lectures only. In random order:
-* Structure and Interpretation of Computer Programs; Harold Abelson and more...;
-* Cloud Operations on AWS - Learn how to configure, deploy, maintain, and troubleshoot your AWS environments; 3-day online live training with labs; Amazon
-* Functional programming lecture; Remote University of Hagen
-* AWS Immersion Day; Amazon; 1-day interactive online training
-* F5 Loadbalancers Training; 2-day on-site training; F5, Inc.
* Apache Tomcat Best Practices; 3-day on-site training
+* Linux Security and Isolation APIs Training; Michael Kerrisk; 3-day on-site training
* Ultimate Go Programming; Bill Kennedy; O'Reilly Online
-* MySQL Deep Dive Workshop; 2-day on-site training
-* Algorithms Video Lectures; Robert Sedgewick; O'Reilly Online
-* The Well-Grounded Rubyist Video Edition; David. A. Black; O'Reilly Online
* Red Hat Certified System Administrator; Course + certification (Although I had the option, I decided not to take the next course, as it is more effective to self-learn what I need)
-* Linux Security and Isolation APIs Training; Michael Kerrisk; 3-day on-site training
-* Scripting Vim; Damian Conway; O'Reilly Online
+* AWS Immersion Day; Amazon; 1-day interactive online training
+* F5 Loadbalancers Training; 2-day on-site training; F5, Inc.
* Protocol buffers; O'Reilly Online
-* The Ultimate Kubernetes Bootcamp; School of Devops; O'Reilly Online
+* Functional programming lecture; Remote University of Hagen
+* Structure and Interpretation of Computer Programs; Harold Abelson and more...;
* Developing IaC with Terraform (with Live Lessons); O'Reilly Online
+* The Ultimate Kubernetes Bootcamp; School of Devops; O'Reilly Online
+* Cloud Operations on AWS - Learn how to configure, deploy, maintain, and troubleshoot your AWS environments; 3-day online live training with labs; Amazon
+* Scripting Vim; Damian Conway; O'Reilly Online
+* Algorithms Video Lectures; Robert Sedgewick; O'Reilly Online
+* The Well-Grounded Rubyist Video Edition; David A. Black; O'Reilly Online
+* MySQL Deep Dive Workshop; 2-day on-site training
## Technical guides
These are not whole books, but guides (smaller or larger) which I found very useful. In random order:
-* Advanced Bash-Scripting Guide
* Raku Guide at https://raku.guide
+* Advanced Bash-Scripting Guide
* How CPUs work at https://cpu.land
## Podcasts
@@ -172,57 +172,57 @@ These are not whole books, but guides (smaller or larger) which I found very use
In random order:
-* Pratical AI
+* BSD Now [BSD]
+* Modern Mentor
+* Hidden Brain
+* Backend Banter
* The Pragmatic Engineer Podcast
-* The Changelog Podcast(s)
-* Fallthrough [Golang]
* Fork Around And Find Out
-* Dev Interrupted
-* Hidden Brain
-* The ProdCast (Google SRE Podcast)
+* Fallthrough [Golang]
+* The Changelog Podcast(s)
* Cup o' Go [Golang]
-* Deep Questions with Cal Newport
-* Modern Mentor
-* BSD Now [BSD]
+* The ProdCast (Google SRE Podcast)
* Maintainable
-* Backend Banter
+* Deep Questions with Cal Newport
+* Practical AI
+* Dev Interrupted
### Podcasts I liked
I liked them but am not listening to them anymore. The podcasts have either "finished" (no more episodes) or I stopped listening to them due to time constraints or a shift in my interests.
-* FLOSS weekly
* Java Pub House
-* CRE: Chaosradio Express [german]
* Ship It (predecessor of Fork Around And Find Out)
-* Modern Mentor
* Go Time (predecessor of Fallthrough)
+* FLOSS weekly
+* CRE: Chaosradio Express [german]
+* Modern Mentor
## Newsletters I like
This is a mix of tech and non-tech newsletters I am subscribed to. In random order:
-* Golang Weekly
-* Register Spill
* The Pragmatic Engineer
-* Changelog News
+* Register Spill
* Andreas Brandhorst Newsletter (Sci-Fi author)
* The Imperfectionist
-* Applied Go Weekly Newsletter
* The Valuable Dev
* Monospace Mentor
-* byteSizeGo
-* Ruby Weekly
* VK Newsletter
+* Changelog News
+* Golang Weekly
+* Ruby Weekly
+* byteSizeGo
+* Applied Go Weekly Newsletter
## Magazines I like(d)
This is a mix of tech magazines I like(d). I may not be a current subscriber, but now and then, I buy an issue. In random order:
-* Linux User
+* LWN (online only)
* Linux Magazine
+* Linux User
* freeX (not published anymore)
-* LWN (online only)
# Formal education
diff --git a/about/showcase.gmi b/about/showcase.gmi
index 3acc9de5..5672e1c3 100644
--- a/about/showcase.gmi
+++ b/about/showcase.gmi
@@ -1,6 +1,6 @@
# Project Showcase
-Generated on: 2025-07-09
+Generated on: 2025-07-12
This page showcases my side projects, providing an overview of what each project does, its technical implementation, and key metrics. Each project summary includes information about the programming languages used, development activity, and licensing. The projects are ordered by recent activity, with the most actively maintained projects listed first.
@@ -13,9 +13,8 @@ This page showcases my side projects, providing an overview of what each project
* ⇢ ⇢ ⇢ timr
* ⇢ ⇢ ⇢ tasksamurai
* ⇢ ⇢ ⇢ rexfiles
-* ⇢ ⇢ ⇢ dtail
-* ⇢ ⇢ ⇢ wireguardmeshgenerator
* ⇢ ⇢ ⇢ ior
+* ⇢ ⇢ ⇢ wireguardmeshgenerator
* ⇢ ⇢ ⇢ ds-sim
* ⇢ ⇢ ⇢ sillybench
* ⇢ ⇢ ⇢ gos
@@ -25,12 +24,13 @@ This page showcases my side projects, providing an overview of what each project
* ⇢ ⇢ ⇢ quicklogger
* ⇢ ⇢ ⇢ docker-gpodder-sync-server
* ⇢ ⇢ ⇢ terraform
+* ⇢ ⇢ ⇢ gogios
* ⇢ ⇢ ⇢ docker-radicale-server
* ⇢ ⇢ ⇢ docker-anki-sync-server
* ⇢ ⇢ ⇢ gorum
* ⇢ ⇢ ⇢ guprecords
-* ⇢ ⇢ ⇢ gogios
* ⇢ ⇢ ⇢ randomjournalpage
+* ⇢ ⇢ ⇢ dtail
* ⇢ ⇢ ⇢ sway-autorotate
* ⇢ ⇢ ⇢ photoalbum
* ⇢ ⇢ ⇢ algorithms
@@ -54,42 +54,43 @@ This page showcases my side projects, providing an overview of what each project
* ⇢ ⇢ ⇢ japi
* ⇢ ⇢ ⇢ perl-poetry
* ⇢ ⇢ ⇢ ipv6test
-* ⇢ ⇢ ⇢ cpuinfo
* ⇢ ⇢ ⇢ loadbars
+* ⇢ ⇢ ⇢ cpuinfo
* ⇢ ⇢ ⇢ perldaemon
* ⇢ ⇢ ⇢ awksite
* ⇢ ⇢ ⇢ jsmstrade
* ⇢ ⇢ ⇢ netcalendar
* ⇢ ⇢ ⇢ ychat
-* ⇢ ⇢ ⇢ vs-sim
* ⇢ ⇢ ⇢ hsbot
* ⇢ ⇢ ⇢ fype
+* ⇢ ⇢ ⇢ vs-sim
## Overall Statistics
* 📦 Total Projects: 55
-* 📊 Total Commits: 10,405
-* 📈 Total Lines of Code: 231,007
-* 📄 Total Lines of Documentation: 24,381
-* 💻 Languages: Java (23.7%), Go (19.2%), C++ (16.1%), C/C++ (8.9%), C (8.3%), Perl (7.3%), Shell (6.4%), Config (2.0%), HTML (2.0%), Ruby (1.2%), HCL (1.2%), Make (0.8%), Python (0.7%), CSS (0.6%), Raku (0.4%), JSON (0.3%), XML (0.3%), Haskell (0.3%), YAML (0.2%), TOML (0.1%)
-* 📚 Documentation: Text (46.3%), Markdown (39.6%), LaTeX (14.1%)
-* 🤖 AI-Assisted Projects: 6 out of 55 (10.9% AI-assisted, 89.1% human-only)
-* 🚀 Release Status: 31 released, 24 experimental (56.4% with releases, 43.6% experimental)
+* 📊 Total Commits: 10,446
+* 📈 Total Lines of Code: 211,600
+* 📄 Total Lines of Documentation: 21,802
+* 💻 Languages: Go (20.2%), Java (19.1%), C++ (17.6%), C/C++ (9.9%), Perl (8.1%), C (7.1%), Shell (6.9%), Config (2.2%), HTML (2.1%), Ruby (1.3%), HCL (1.3%), Make (0.9%), Python (0.8%), CSS (0.7%), Raku (0.6%), JSON (0.4%), XML (0.3%), Haskell (0.3%), YAML (0.2%), TOML (0.1%)
+* 📚 Documentation: Text (52.5%), Markdown (45.2%), LaTeX (2.3%)
+* 🎵 Vibe-Coded Projects: 2 out of 55 (3.6%)
+* 🤖 AI-Assisted Projects (including vibe-coded): 7 out of 55 (12.7% AI-assisted, 87.3% human-only)
+* 🚀 Release Status: 33 released, 22 experimental (60.0% with releases, 40.0% experimental)
## Projects
### gitsyncer
-* 💻 Languages: Go (86.7%), Shell (11.4%), YAML (1.4%), JSON (0.5%)
+* 💻 Languages: Go (89.5%), Shell (8.9%), YAML (1.1%), JSON (0.4%)
* 📚 Documentation: Markdown (100.0%)
-* 📊 Commits: 60
-* 📈 Lines of Code: 6548
-* 📄 Lines of Documentation: 2338
-* 📅 Development Period: 2025-06-23 to 2025-07-09
-* 🔥 Recent Activity: 3.4 days (avg. age of last 42 commits)
+* 📊 Commits: 76
+* 📈 Lines of Code: 8340
+* 📄 Lines of Documentation: 2363
+* 📅 Development Period: 2025-06-23 to 2025-07-12
+* 🔥 Recent Activity: 2.5 days (avg. age of last 42 commits)
* ⚖️ License: BSD-2-Clause
* 🏷️ Latest Release: v0.5.0 (2025-07-09)
-* 🤖 AI-Assisted: This project was partially created with the help of generative AI
+* 🎵 Vibe-Coded: This project has been vibe coded
GitSyncer is a cross-platform repository synchronization tool that automatically keeps Git repositories in sync across multiple hosting platforms like GitHub, Codeberg, and private SSH servers. It solves the common problem of maintaining consistent code across different Git hosting services by cloning repositories, adding all configured platforms as remotes, and continuously merging and pushing changes bidirectionally while handling branch creation and conflict detection.
@@ -99,19 +100,14 @@ The tool is implemented in Go with a clean architecture that supports both indiv
=> https://codeberg.org/snonux/gitsyncer View on Codeberg
=> https://github.com/snonux/gitsyncer View on GitHub
-Go from `internal/cli/handlers.go`:
+Go from `internal/showcase/images.go`:
```AUTO
-func LoadConfig(configPath string) (*config.Config, error) {
- if configPath == "" {
- configPath = findDefaultConfigPath()
- if configPath == "" {
- return nil, fmt.Errorf("no configuration file found")
- }
- }
-
- fmt.Printf("Loaded configuration from: %s\n", configPath)
- return config.Load(configPath)
+func isGitHostedImage(url string) bool {
+ return strings.Contains(url, "github.com") ||
+ strings.Contains(url, "githubusercontent.com") ||
+ strings.Contains(url, "codeberg.org") ||
+ strings.Contains(url, "codeberg.page")
}
```
@@ -121,13 +117,13 @@ func LoadConfig(configPath string) (*config.Config, error) {
* 💻 Languages: Go (98.3%), YAML (1.7%)
* 📚 Documentation: Markdown (100.0%)
-* 📊 Commits: 19
+* 📊 Commits: 20
* 📈 Lines of Code: 873
* 📄 Lines of Documentation: 135
-* 📅 Development Period: 2025-06-25 to 2025-06-29
-* 🔥 Recent Activity: 12.9 days (avg. age of last 42 commits)
+* 📅 Development Period: 2025-06-25 to 2025-07-12
+* 🔥 Recent Activity: 15.4 days (avg. age of last 42 commits)
* ⚖️ License: BSD-2-Clause
-* 🧪 Status: Experimental (no releases yet)
+* 🏷️ Latest Release: v0.0.0 (2025-06-29)
* 🤖 AI-Assisted: This project was partially created with the help of generative AI
@@ -138,10 +134,14 @@ The project is implemented using a clean modular architecture with the CLI entry
=> https://codeberg.org/snonux/timr View on Codeberg
=> https://github.com/snonux/timr View on GitHub
-Go from `internal/version.go`:
+Go from `internal/live/live.go`:
```AUTO
-const Version = "v0.0.0"
+func tick() tea.Cmd {
+ return tea.Tick(time.Second, func(t time.Time) tea.Msg {
+ return tickMsg(t)
+ })
+}
```
---
@@ -150,14 +150,14 @@ const Version = "v0.0.0"
* 💻 Languages: Go (99.8%), YAML (0.2%)
* 📚 Documentation: Markdown (100.0%)
-* 📊 Commits: 215
+* 📊 Commits: 216
* 📈 Lines of Code: 6160
* 📄 Lines of Documentation: 162
-* 📅 Development Period: 2025-06-19 to 2025-07-08
-* 🔥 Recent Activity: 13.3 days (avg. age of last 42 commits)
+* 📅 Development Period: 2025-06-19 to 2025-07-12
+* 🔥 Recent Activity: 16.1 days (avg. age of last 42 commits)
* ⚖️ License: BSD-2-Clause
* 🏷️ Latest Release: v0.9.2 (2025-07-02)
-* 🤖 AI-Assisted: This project was partially created with the help of generative AI
+* 🎵 Vibe-Coded: This project has been vibe coded
=> showcase/tasksamurai/image-1.png tasksamurai screenshot
@@ -171,52 +171,23 @@ The implementation follows a clean architecture with clear separation of concern
=> https://codeberg.org/snonux/tasksamurai View on Codeberg
=> https://github.com/snonux/tasksamurai View on GitHub
-Go from `internal/ui/table.go`:
+Go from `internal/version.go`:
```AUTO
-func editDescriptionCmd(description string) tea.Cmd {
- return func() tea.Msg {
- tmpFile, err := os.CreateTemp("", "tasksamurai-desc-*.txt")
- if err != nil {
- return descEditDoneMsg{err: err, tempFile: ""}
- }
- tmpPath := tmpFile.Name()
-
- _, err = tmpFile.WriteString(description)
- tmpFile.Close()
- if err != nil {
- os.Remove(tmpPath)
- return descEditDoneMsg{err: err, tempFile: ""}
- }
-
- editor := os.Getenv("EDITOR")
- if editor == "" {
- editor = "vi"
- }
-
- c := exec.Command(editor, tmpPath)
- c.Stdin = os.Stdin
- c.Stdout = os.Stdout
- c.Stderr = os.Stderr
-
- return tea.ExecProcess(c, func(err error) tea.Msg {
- return descEditDoneMsg{err: err, tempFile: tmpPath}
- })()
- }
-}
+const Version = "0.9.2"
```
---
### rexfiles
-* 💻 Languages: Shell (34.7%), Perl (32.8%), Config (8.4%), CSS (8.2%), TOML (7.3%), Ruby (6.0%), Lua (1.8%), JSON (0.7%), INI (0.2%)
+* 💻 Languages: Perl (38.2%), Shell (30.6%), Config (8.0%), CSS (7.9%), TOML (7.0%), Ruby (5.7%), Lua (1.7%), JSON (0.7%), INI (0.1%)
* 📚 Documentation: Text (97.3%), Markdown (2.7%)
-* 📊 Commits: 875
-* 📈 Lines of Code: 3956
+* 📊 Commits: 876
+* 📈 Lines of Code: 4123
* 📄 Lines of Documentation: 854
-* 📅 Development Period: 2021-12-28 to 2025-07-09
-* 🔥 Recent Activity: 16.4 days (avg. age of last 42 commits)
+* 📅 Development Period: 2021-12-28 to 2025-07-12
+* 🔥 Recent Activity: 18.8 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -228,72 +199,63 @@ The project consists of three main components: **dotfiles** management for perso
=> https://codeberg.org/snonux/rexfiles View on Codeberg
=> https://github.com/snonux/rexfiles View on GitHub
-Shell from `frontends/scripts/sitestats.sh`:
+Perl from `frontends/scripts/foostats.pl`:
```AUTO
-STATSFILE=/tmp/sitestats.csv
-BOTSFILE=/tmp/sitebots.txt
-TOP=20
+sub write ( $path, $content ) {
+ open my $fh, '>', "$path.tmp"
+ or die "\nCannot open file: $!";
+ print $fh $content;
+ close $fh;
+
+ rename
+ "$path.tmp",
+ $path;
+}
```
---
-### dtail
+### ior
-* 💻 Languages: Go (93.9%), JSON (2.8%), C (2.0%), Make (0.5%), C/C++ (0.3%), Config (0.2%), Shell (0.2%), Docker (0.1%)
-* 📚 Documentation: Text (79.4%), Markdown (20.6%)
-* 📊 Commits: 1049
-* 📈 Lines of Code: 20091
-* 📄 Lines of Documentation: 5674
-* 📅 Development Period: 2020-01-09 to 2025-06-20
-* 🔥 Recent Activity: 52.4 days (avg. age of last 42 commits)
-* ⚖️ License: Apache-2.0
-* 🏷️ Latest Release: v4.2.0 (2023-06-21)
+* 💻 Languages: Go (81.0%), Raku (11.5%), C (4.4%), Make (1.7%), C/C++ (1.5%)
+* 📚 Documentation: Text (63.6%), Markdown (36.4%)
+* 📊 Commits: 330
+* 📈 Lines of Code: 7911
+* 📄 Lines of Documentation: 742
+* 📅 Development Period: 2024-01-18 to 2025-07-12
+* 🔥 Recent Activity: 56.3 days (avg. age of last 42 commits)
+* ⚖️ License: No license found
+* 🧪 Status: Experimental (no releases yet)
* 🤖 AI-Assisted: This project was partially created with the help of generative AI
-=> showcase/dtail/image-1.png dtail screenshot
+=> showcase/ior/image-1.png ior screenshot
-DTail is a distributed log processing system written in Go that allows DevOps engineers to tail, cat, and grep log files across thousands of servers concurrently. It provides secure access through SSH authentication and respects UNIX file system permissions, making it ideal for enterprise environments where log analysis needs to scale horizontally across large server fleets. The tool supports advanced features like compressed file handling (gzip/zstd) and distributed MapReduce aggregations for complex log analytics.
+Here is a summary of the I/O Riot NG (ior) project:
-=> showcase/dtail/image-2.gif dtail screenshot
+=> showcase/ior/image-2.svg ior screenshot
-The system uses a client-server architecture where dtail servers run on target machines (listening on port 2222) and clients connect to multiple servers simultaneously. It can also operate in serverless mode for local operations. The implementation leverages SSH for secure communication, includes sophisticated connection throttling and resource management, and provides specialized tools (dcat, dgrep, dmap) for different log processing tasks. The MapReduce functionality supports SQL-like queries with server-side local aggregation and client-side final aggregation, enabling powerful distributed analytics across log data.
+**I/O Riot NG** is a Linux-based performance monitoring tool that uses eBPF (extended Berkeley Packet Filter) to trace synchronous I/O system calls and analyze their execution times. This tool is particularly valuable for system performance analysis, allowing developers and system administrators to visualize I/O bottlenecks through detailed flamegraphs. It serves as a modern successor to the original I/O Riot project, migrating from SystemTap/C to a Go/C/BPF implementation for better performance and maintainability.
-=> https://codeberg.org/snonux/dtail View on Codeberg
-=> https://github.com/snonux/dtail View on GitHub
+The architecture combines kernel-level tracing with user-space analysis: eBPF programs (`internal/c/ior.bpf.c`) attach to kernel tracepoints to capture syscall entry/exit events, which are then processed by a Go-based event loop (`internal/eventloop.go`) that correlates enter/exit pairs, tracks file descriptors, and measures timing. The tool can operate in real-time mode for live monitoring or post-processing mode to generate flamegraphs from previously collected data using the Inferno flamegraph library. Key features include filtering capabilities for specific processes or file patterns, comprehensive statistics collection, and support for various I/O syscalls like open, read, write, close, and dup operations.
+
+=> https://codeberg.org/snonux/ior View on Codeberg
+=> https://github.com/snonux/ior View on GitHub
-Go from `internal/io/signal/signal.go`:
+Go from `internal/file/file.go`:
```AUTO
-func InterruptCh(ctx context.Context) <-chan string {
- sigIntCh := make(chan os.Signal, 10)
- gosignal.Notify(sigIntCh, os.Interrupt)
- sigOtherCh := make(chan os.Signal, 10)
- gosignal.Notify(sigOtherCh, syscall.SIGHUP, syscall.SIGTERM, syscall.SIGQUIT)
- statsCh := make(chan string)
-
- go func() {
- for {
- select {
- case <-sigIntCh:
- select {
- case statsCh <- "Hint: Hit Ctrl+C again to exit":
- select {
- case <-sigIntCh:
- os.Exit(0)
- case <-time.After(time.Second * time.Duration(config.InterruptTimeoutS)):
- }
- default:
- }
- case <-sigOtherCh:
- os.Exit(0)
- case <-ctx.Done():
- return
- }
- }
- }()
- return statsCh
+func NewFd(fd int32, name []byte, flags int32) FdFile {
+ f := FdFile{
+ fd: fd,
+ name: types.StringValue(name),
+ flags: Flags(flags),
+ }
+ if f.flags == -1 {
+ panic(fmt.Sprintf("DEBUG with -1 flags: %v", f))
+ }
+ return f
}
```
@@ -307,7 +269,7 @@ func InterruptCh(ctx context.Context) <-chan string {
* 📈 Lines of Code: 396
* 📄 Lines of Documentation: 24
* 📅 Development Period: 2025-04-18 to 2025-05-11
-* 🔥 Recent Activity: 71.7 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 74.9 days (avg. age of last 42 commits)
* ⚖️ License: Custom License
* 🏷️ Latest Release: v1.0.0 (2025-05-11)
@@ -332,50 +294,6 @@ def initialize(myself)
---
-### ior
-
-* 💻 Languages: C (54.7%), Go (37.4%), Raku (5.4%), Make (1.4%), C/C++ (1.1%)
-* 📚 Documentation: Text (84.1%), Markdown (15.9%)
-* 📊 Commits: 316
-* 📈 Lines of Code: 9835
-* 📄 Lines of Documentation: 559
-* 📅 Development Period: 2024-01-18 to 2025-06-14
-* 🔥 Recent Activity: 83.7 days (avg. age of last 42 commits)
-* ⚖️ License: No license found
-* 🧪 Status: Experimental (no releases yet)
-
-
-=> showcase/ior/image-1.png ior screenshot
-
-Based on my analysis of the codebase, here's a comprehensive summary of the I/O Riot NG (ior) project:
-
-=> showcase/ior/image-2.svg ior screenshot
-
-**I/O Riot NG** is a Linux-based performance monitoring tool that uses eBPF (extended Berkeley Packet Filter) to trace synchronous I/O system calls and analyze their execution times. This tool is particularly valuable for system performance analysis, allowing developers and system administrators to visualize I/O bottlenecks through detailed flamegraphs. It serves as a modern successor to the original I/O Riot project, migrating from SystemTap/C to a Go/C/BPF implementation for better performance and maintainability.
-
-The architecture combines kernel-level tracing with user-space analysis: eBPF programs (`internal/c/ior.bpf.c`) attach to kernel tracepoints to capture syscall entry/exit events, which are then processed by a Go-based event loop (`internal/eventloop.go`) that correlates enter/exit pairs, tracks file descriptors, and measures timing. The tool can operate in real-time mode for live monitoring or post-processing mode to generate flamegraphs from previously collected data using the Inferno flamegraph library. Key features include filtering capabilities for specific processes or file patterns, comprehensive statistics collection, and support for various I/O syscalls like open, read, write, close, and dup operations.
-
-=> https://codeberg.org/snonux/ior View on Codeberg
-=> https://github.com/snonux/ior View on GitHub
-
-C from `tools/forktest.c`:
-
-```AUTO
-int main() {
- int fd = open("testfile", O_WRONLY| O_CREAT, 0644);
- if (fd < 0) {
- perror("open");
- return 1;
- }
- int flags = fcntl(fd, F_GETFL);
- printf("Parent: File access mode is O_RDWR|O_CREAT (%d %d %d)\n", flags,
- O_RDWR|O_CREAT, O_WRONLY|O_CREAT);
-
- pid_t pid = fork();
-```
-
----
-
### ds-sim
* 💻 Languages: Java (98.9%), Shell (0.6%), CSS (0.5%)
@@ -384,7 +302,7 @@ int main() {
* 📈 Lines of Code: 25762
* 📄 Lines of Documentation: 3101
* 📅 Development Period: 2008-05-15 to 2025-06-27
-* 🔥 Recent Activity: 85.0 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 88.3 days (avg. age of last 42 commits)
* ⚖️ License: Custom License
* 🧪 Status: Experimental (no releases yet)
* 🤖 AI-Assisted: This project was partially created with the help of generative AI
@@ -399,28 +317,16 @@ The project is built on an event-driven architecture with clear component separa
=> https://codeberg.org/snonux/ds-sim View on Codeberg
=> https://github.com/snonux/ds-sim View on GitHub
-Java from `src/main/java/simulator/VSCreateTask.java`:
+Java from `src/main/java/protocols/implementations/VSPingPongProtocol.java`:
```AUTO
-private String eventClassname;
-
-private String menuText;
-
-private String protocolClassname;
-
-private String shortname;
-
-private boolean isProtocolActivation;
-
-private boolean isProtocolDeactivation;
+private int clientCounter;
-private boolean isClientProtocol;
+private int serverCounter;
-private boolean isRequest;
-
-public VSCreateTask(String menuText, String eventClassname) {
- this.menuText = menuText;
- this.eventClassname = eventClassname;
+public VSPingPongProtocol() {
+ super(VSAbstractProtocol.HAS_ON_CLIENT_START);
+ setClassname(getClass().toString());
}
```
@@ -434,7 +340,7 @@ public VSCreateTask(String menuText, String eventClassname) {
* 📈 Lines of Code: 33
* 📄 Lines of Documentation: 3
* 📅 Development Period: 2025-04-03 to 2025-04-03
-* 🔥 Recent Activity: 97.6 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 100.9 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -464,7 +370,7 @@ func main() {
* 📈 Lines of Code: 3967
* 📄 Lines of Documentation: 411
* 📅 Development Period: 2024-05-04 to 2025-06-12
-* 🔥 Recent Activity: 114.5 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 117.8 days (avg. age of last 42 commits)
* ⚖️ License: Custom License
* 🏷️ Latest Release: v1.0.0 (2025-03-04)
* 🤖 AI-Assisted: This project was partially created with the help of generative AI
@@ -481,16 +387,29 @@ The tool is architected around a file-based queueing system where posts progress
=> https://codeberg.org/snonux/gos View on Codeberg
=> https://github.com/snonux/gos View on GitHub
-Go from `internal/platforms/linkedin/linkedin.go`:
+Go from `internal/config/args.go`:
```AUTO
-func postImageToLinkedInAPI(ctx context.Context, personURN, accessToken,
- imagePath string) (string, error) {
- uploadURL, imageURN, err := initializeImageUpload(ctx, personURN, accessToken)
- if err != nil {
- return imageURN, err
+func (a *Args) ParsePlatforms(platformStrs string) error {
+ a.Platforms = make(map[string]int)
+
+ for _, platformInfo := range strings.Split(platformStrs, ",") {
+ parts := strings.Split(platformInfo, ":")
+ platformStr := parts[0]
+
+ if len(parts) > 1 {
+ var err error
+ a.Platforms[platformStr], err = strconv.Atoi(parts[1])
+ if err != nil {
+ return err
+ }
+ } else {
+ colour.Infoln("No message length specified for", platformStr,
+ "so assuming 500")
+ a.Platforms[platformStr] = 500
+ }
}
- return imageURN, performImageUpload(ctx, imagePath, uploadURL, accessToken)
+ return nil
}
```
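The platform spec format handled above is a comma-separated list of `name` or `name:maxlength` entries, with 500 assumed when no length is given. A minimal standalone sketch of that parsing logic (the function name and default are taken from the snippet; the input values are hypothetical, not from the gos codebase):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parsePlatforms splits a "mastodon:500,linkedin" style spec into a
// platform -> max message length map, defaulting to 500 when no
// length is given (mirroring the snippet above).
func parsePlatforms(spec string) (map[string]int, error) {
	platforms := make(map[string]int)
	for _, entry := range strings.Split(spec, ",") {
		parts := strings.Split(entry, ":")
		if len(parts) > 1 {
			n, err := strconv.Atoi(parts[1])
			if err != nil {
				return nil, err
			}
			platforms[parts[0]] = n
		} else {
			platforms[parts[0]] = 500 // assumed default length
		}
	}
	return platforms, nil
}

func main() {
	p, err := parsePlatforms("mastodon:500,linkedin")
	fmt.Println(p, err)
}
```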
@@ -500,13 +419,13 @@ func postImageToLinkedInAPI(ctx context.Context, personURN, accessToken,
* 💻 Languages: Perl (100.0%)
* 📚 Documentation: Markdown (85.1%), Text (14.9%)
-* 📊 Commits: 68
-* 📈 Lines of Code: 1556
+* 📊 Commits: 70
+* 📈 Lines of Code: 1586
* 📄 Lines of Documentation: 154
-* 📅 Development Period: 2023-01-02 to 2025-07-09
-* 🔥 Recent Activity: 128.9 days (avg. age of last 42 commits)
+* 📅 Development Period: 2023-01-02 to 2025-07-12
+* 🔥 Recent Activity: 121.8 days (avg. age of last 42 commits)
* ⚖️ License: Custom License
-* 🧪 Status: Experimental (no releases yet)
+* 🏷️ Latest Release: v0.1.0 (2025-07-12)
Based on the README and project structure, **foostats** is a privacy-respecting web analytics tool written in Perl specifically designed for OpenBSD systems. It processes both traditional HTTP/HTTPS logs and Gemini protocol logs to generate comprehensive traffic statistics while maintaining visitor privacy through SHA3-512 IP hashing. The tool is built for the foo.zone ecosystem and similar sites that need analytics without compromising user privacy.
@@ -541,7 +460,7 @@ sub write ( $path, $content ) {
* 📈 Lines of Code: 1373
* 📄 Lines of Documentation: 48
* 📅 Development Period: 2024-12-05 to 2025-02-28
-* 🔥 Recent Activity: 138.3 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 141.6 days (avg. age of last 42 commits)
* ⚖️ License: Custom License
* 🧪 Status: Experimental (no releases yet)
@@ -553,16 +472,17 @@ The system is implemented with a modular architecture centered around a DSL clas
=> https://codeberg.org/snonux/rcm View on Codeberg
=> https://github.com/snonux/rcm View on GitHub
-Ruby from `lib/dslkeywords/package.rb`:
+Ruby from `lib/dslkeywords/given.rb`:
```AUTO
-def package(name, &block)
- return unless @conds_met
+def respond_to_missing? = true
+
+def met?
+ return false if @conds.key?(:hostname) && Socket.gethostname !=
+ @conds[:hostname].to_s
- f = Package.new(name)
- f.packages(f.instance_eval(&block))
- self << f
- f
+ true
+end
```
---
@@ -575,7 +495,7 @@ def package(name, &block)
* 📈 Lines of Code: 2268
* 📄 Lines of Documentation: 1180
* 📅 Development Period: 2021-05-21 to 2025-07-09
-* 🔥 Recent Activity: 200.7 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 204.0 days (avg. age of last 42 commits)
* ⚖️ License: GPL-3.0
* 🏷️ Latest Release: 3.0.0 (2024-10-01)
@@ -611,7 +531,7 @@ while read -r src; do
* 📈 Lines of Code: 917
* 📄 Lines of Documentation: 33
* 📅 Development Period: 2024-01-20 to 2025-07-06
-* 🔥 Recent Activity: 448.3 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 451.6 days (avg. age of last 42 commits)
* ⚖️ License: MIT
* 🏷️ Latest Release: v0.0.3 (2025-07-06)
@@ -682,7 +602,7 @@ func createPreferenceWindow(a fyne.App) fyne.Window {
* 📈 Lines of Code: 12
* 📄 Lines of Documentation: 3
* 📅 Development Period: 2024-03-24 to 2024-03-24
-* 🔥 Recent Activity: 472.1 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 475.4 days (avg. age of last 42 commits)
* ⚖️ License: Custom License
* 🧪 Status: Experimental (no releases yet)
@@ -719,7 +639,7 @@ aws: build
* 📈 Lines of Code: 2850
* 📄 Lines of Documentation: 52
* 📅 Development Period: 2023-08-27 to 2025-04-05
-* 🔥 Recent Activity: 502.1 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 505.4 days (avg. age of last 42 commits)
* ⚖️ License: MIT
* 🧪 Status: Experimental (no releases yet)
@@ -731,17 +651,72 @@ The system is designed to host multiple personal services including Anki sync se
=> https://codeberg.org/snonux/terraform View on Codeberg
=> https://github.com/snonux/terraform View on GitHub
-HCL from `s3-org-buetow-tfstate/main.tf`:
+HCL from `org-buetow-base/ecr.tf`:
```AUTO
-terraform {
- backend "s3" {
- bucket = "org-buetow-tfstate"
- key = "s3-org-buetow-tfstate/terraform.tfstate"
- region = "eu-central-1"
- encrypt = true
+resource "aws_ecr_repository" "radicale-read" {
+ name = "radicale"
+
+ tags = {
+ Name = "radicale"
}
}
+
+resource "aws_iam_policy" "ecr_radicale_read" {
+```
+
+---
+
+### gogios
+
+* 💻 Languages: Go (94.4%), YAML (3.4%), JSON (2.2%)
+* 📚 Documentation: Markdown (100.0%)
+* 📊 Commits: 77
+* 📈 Lines of Code: 1096
+* 📄 Lines of Documentation: 287
+* 📅 Development Period: 2023-04-17 to 2025-06-12
+* 🔥 Recent Activity: 518.3 days (avg. age of last 42 commits)
+* ⚖️ License: Custom License
+* 🏷️ Latest Release: v1.1.0 (2024-05-03)
+* 🤖 AI-Assisted: This project was partially created with the help of generative AI
+
+
+=> showcase/gogios/image-1.png gogios screenshot
+
+Gogios is a lightweight, minimalistic monitoring tool written in Go designed for small-scale server monitoring. It executes standard Nagios-compatible check plugins and sends email notifications only when service states change, making it ideal for personal infrastructure or small environments with limited resources. The tool emphasizes simplicity over complexity, avoiding the bloat of enterprise monitoring solutions like Nagios, Icinga, or Prometheus by eliminating features like web UIs, databases, contact groups, and clustering.
+
+The implementation follows a clean architecture with concurrent check execution, dependency management, and persistent state tracking. Key features include state-based notifications (only alerts on status changes), configurable retry logic, federation support for distributed monitoring, and stale detection for checks that haven't run recently. The tool is configured via JSON and requires only a local mail transfer agent for notifications. It's designed to run via cron jobs and supports high-availability setups through simple dual-server configurations, making it perfect for users who want effective monitoring without operational overhead.
+
+=> https://codeberg.org/snonux/gogios View on Codeberg
+=> https://github.com/snonux/gogios View on GitHub
+
+Go from `internal/check.go`:
+
+```AUTO
+func (c check) run(ctx context.Context, name string) checkResult {
+ cmd := exec.CommandContext(ctx, c.Plugin, c.Args...)
+
+ var bytes bytes.Buffer
+ cmd.Stdout = &bytes
+ cmd.Stderr = &bytes
+
+ if err := cmd.Run(); err != nil {
+ if ctx.Err() == context.DeadlineExceeded {
+ return checkResult{name, "Check command timed out", time.Now().Unix(),
+ nagiosCritical, false}
+ }
+ }
+
+ parts := strings.Split(bytes.String(), "|")
+ output := strings.TrimSpace(parts[0])
+
+ ec := cmd.ProcessState.ExitCode()
+ if ec < int(nagiosOk) || ec > int(nagiosUnknown) {
+ ec = int(nagiosUnknown)
+ }
+
+ return checkResult{name, output, time.Now().Unix(), nagiosCode(ec), false}
+}
```
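The Nagios plugin convention the check runner above relies on is fixed: exit code 0 is OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN, and anything outside that range is clamped to UNKNOWN. A minimal sketch of that clamping (constant and function names are illustrative, not gogios' actual identifiers):

```go
package main

import "fmt"

// Standard Nagios plugin exit codes.
const (
	nagiosOK       = 0
	nagiosWarning  = 1
	nagiosCritical = 2
	nagiosUnknown  = 3
)

// clampExitCode maps any out-of-range plugin exit code to UNKNOWN,
// as the check runner above does.
func clampExitCode(ec int) int {
	if ec < nagiosOK || ec > nagiosUnknown {
		return nagiosUnknown
	}
	return ec
}

func main() {
	fmt.Println(clampExitCode(2), clampExitCode(127)) // 2 3
}
```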
---
@@ -754,7 +729,7 @@ terraform {
* 📈 Lines of Code: 32
* 📄 Lines of Documentation: 3
* 📅 Development Period: 2023-12-31 to 2023-12-31
-* 🔥 Recent Activity: 555.7 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 558.9 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -791,7 +766,7 @@ run: build
* 📈 Lines of Code: 29
* 📄 Lines of Documentation: 3
* 📅 Development Period: 2023-08-13 to 2024-01-01
-* 🔥 Recent Activity: 648.9 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 652.2 days (avg. age of last 42 commits)
* ⚖️ License: MIT
* 🧪 Status: Experimental (no releases yet)
@@ -829,7 +804,7 @@ aws:
* 📈 Lines of Code: 1525
* 📄 Lines of Documentation: 15
* 📅 Development Period: 2023-04-17 to 2023-11-19
-* 🔥 Recent Activity: 701.0 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 704.3 days (avg. age of last 42 commits)
* ⚖️ License: Custom License
* 🧪 Status: Experimental (no releases yet)
@@ -841,15 +816,16 @@ The architecture consists of several key components: a quorum manager that handl
=> https://codeberg.org/snonux/gorum View on Codeberg
=> https://github.com/snonux/gorum View on GitHub
-Go from `internal/utils/string.go`:
+Go from `internal/vote/vote.go`:
```AUTO
- "strings"
-)
+func New(conf config.Config, ids ...string) (Vote, error) {
+ var v Vote
-func StripPort(addr string) string {
- parts := strings.Split(addr, ":")
- return parts[0]
+ v.FromID = conf.MyID
+ v.IDs = ids
+
+ return v, nil
}
```
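Quorum systems like the one gorum implements decide by majority: a vote carries only when more than half of the members agree. A generic sketch of that threshold, not gorum's actual API:

```go
package main

import "fmt"

// majority returns the minimum number of votes a quorum of n
// members needs to win: strictly more than half.
func majority(n int) int {
	return n/2 + 1
}

func main() {
	fmt.Println(majority(3), majority(4), majority(5)) // 2 3 3
}
```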
@@ -863,7 +839,7 @@ func StripPort(addr string) string {
* 📈 Lines of Code: 312
* 📄 Lines of Documentation: 416
* 📅 Development Period: 2013-03-22 to 2025-05-18
-* 🔥 Recent Activity: 751.1 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 754.3 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🏷️ Latest Release: v1.0.0 (2023-04-29)
@@ -898,75 +874,6 @@ method output-trim(Str \str, UInt \line-limit --> Str) {
---
-### gogios
-
-* 💻 Languages: Go (90.8%), YAML (5.6%), JSON (3.6%)
-* 📚 Documentation: Markdown (100.0%)
-* 📊 Commits: 77
-* 📈 Lines of Code: 662
-* 📄 Lines of Documentation: 195
-* 📅 Development Period: 2023-04-17 to 2024-05-03
-* 🔥 Recent Activity: 762.0 days (avg. age of last 42 commits)
-* ⚖️ License: Custom License
-* 🏷️ Latest Release: v1.1.0 (2024-05-03)
-
-⚠️ **Notice**: This project appears to be finished, obsolete, or no longer maintained. Last meaningful activity was over 2 years ago. Use at your own risk.
-
-=> showcase/gogios/image-1.png gogios screenshot
-
-Gogios is a lightweight, minimalistic monitoring tool written in Go designed for small-scale server monitoring. It executes standard Nagios-compatible check plugins and sends email notifications only when service states change, making it ideal for personal infrastructure or small environments with limited resources. The tool emphasizes simplicity over complexity, avoiding the bloat of enterprise monitoring solutions like Nagios, Icinga, or Prometheus by eliminating features like web UIs, databases, contact groups, and clustering.
-
-The implementation follows a clean architecture with concurrent check execution, dependency management, and persistent state tracking. Key features include state-based notifications (only alerts on status changes), configurable retry logic, federation support for distributed monitoring, and stale detection for checks that haven't run recently. The tool is configured via JSON and requires only a local mail transfer agent for notifications. It's designed to run via cron jobs and supports high-availability setups through simple dual-server configurations, making it perfect for users who want effective monitoring without operational overhead.
-
-=> https://codeberg.org/snonux/gogios View on Codeberg
-=> https://github.com/snonux/gogios View on GitHub
-
-Go from `internal/state.go`:
-
-```AUTO
-func readState(conf config) (state, error) {
- s := state{
- stateFile: fmt.Sprintf("%s/state.json", conf.StateDir),
- checks: make(map[string]checkState),
- }
-
- if _, err := os.Stat(s.stateFile); err != nil {
- return s, nil
- }
-
- file, err := os.Open(s.stateFile)
- if err != nil {
- return s, err
- }
- defer file.Close()
-
- bytes, err := io.ReadAll(file)
- if err != nil {
- return s, err
- }
-
- if err := json.Unmarshal(bytes, &s.checks); err != nil {
- return s, err
- }
-
- var obsolete []string
- for name := range s.checks {
- if _, ok := conf.Checks[name]; !ok {
- obsolete = append(obsolete, name)
- }
- }
-
- for _, name := range obsolete {
- delete(s.checks, name)
- log.Printf("State of %s is obsolete (removed)", name)
- }
-
- return s, nil
-}
-```
-
----
-
### randomjournalpage
* 💻 Languages: Shell (94.1%), Make (5.9%)
@@ -975,7 +882,7 @@ func readState(conf config) (state, error) {
* 📈 Lines of Code: 51
* 📄 Lines of Documentation: 26
* 📅 Development Period: 2022-06-02 to 2024-04-20
-* 🔥 Recent Activity: 765.8 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 769.1 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -1000,6 +907,62 @@ declare -i NUM_PAGES_TO_EXTRACT=42 # This is the answear!
---
+### dtail
+
+* 💻 Languages: Go (91.1%), JSON (4.1%), C (2.9%), Make (0.6%), C/C++ (0.5%), Config (0.3%), Shell (0.2%), Docker (0.2%)
+* 📚 Documentation: Text (80.4%), Markdown (19.6%)
+* 📊 Commits: 1049
+* 📈 Lines of Code: 13525
+* 📄 Lines of Documentation: 5375
+* 📅 Development Period: 2020-01-09 to 2023-10-05
+* 🔥 Recent Activity: 781.8 days (avg. age of last 42 commits)
+* ⚖️ License: Apache-2.0
+* 🏷️ Latest Release: v4.2.0 (2023-06-21)
+
+⚠️ **Notice**: This project appears to be finished, obsolete, or no longer maintained. Last meaningful activity was over 2 years ago. Use at your own risk.
+
+=> showcase/dtail/image-1.png dtail screenshot
+
+DTail is a distributed log processing system written in Go that allows DevOps engineers to tail, cat, and grep log files across thousands of servers concurrently. It provides secure access through SSH authentication and respects UNIX file system permissions, making it ideal for enterprise environments where log analysis needs to scale horizontally across large server fleets. The tool supports advanced features like compressed file handling (gzip/zstd) and distributed MapReduce aggregations for complex log analytics.
+
+=> showcase/dtail/image-2.gif dtail screenshot
+
+The system uses a client-server architecture where dtail servers run on target machines (listening on port 2222) and clients connect to multiple servers simultaneously. It can also operate in serverless mode for local operations. The implementation leverages SSH for secure communication, includes sophisticated connection throttling and resource management, and provides specialized tools (dcat, dgrep, dmap) for different log processing tasks. The MapReduce functionality supports SQL-like queries with server-side local aggregation and client-side final aggregation, enabling powerful distributed analytics across log data.
+
+=> https://codeberg.org/snonux/dtail View on Codeberg
+=> https://github.com/snonux/dtail View on GitHub
+
+Go from `internal/io/fs/readfilelcontext.go`:
+
+```AUTO
+func (f *readFile) lContextNotMatched(ctx context.Context, ls *ltxState,
+ lines chan<- *line.Line, rawLine *bytes.Buffer) readStatus {
+
+ if ls.processAfter && ls.after > 0 {
+ ls.after--
+ myLine := line.New(rawLine, f.totalLineCount(), 100, f.globID)
+
+ select {
+ case lines <- myLine:
+ case <-ctx.Done():
+ return abortReading
+ }
+
+ } else if ls.processBefore {
+ select {
+ case ls.beforeBuf <- rawLine:
+ default:
+ pool.RecycleBytesBuffer(<-ls.beforeBuf)
+ ls.beforeBuf <- rawLine
+ }
+ }
+
+ return continueReading
+}
+```
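The two-phase MapReduce flow described above, server-side local aggregation followed by client-side final aggregation, reduces to merging partial result maps at the client. A generic sketch of that merge step (not dtail's actual API):

```go
package main

import "fmt"

// mergeCounts combines per-server partial counts into the final
// result: each server pre-aggregates its own log lines, and the
// client sums the partials key by key.
func mergeCounts(partials []map[string]int) map[string]int {
	final := make(map[string]int)
	for _, partial := range partials {
		for key, count := range partial {
			final[key] += count
		}
	}
	return final
}

func main() {
	serverA := map[string]int{"ERROR": 3, "WARN": 1}
	serverB := map[string]int{"ERROR": 2}
	fmt.Println(mergeCounts([]map[string]int{serverA, serverB}))
}
```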
+
+---
+
### sway-autorotate
* 💻 Languages: Shell (100.0%)
@@ -1008,7 +971,7 @@ declare -i NUM_PAGES_TO_EXTRACT=42 # This is the answear!
* 📈 Lines of Code: 41
* 📄 Lines of Documentation: 17
* 📅 Development Period: 2020-01-30 to 2025-04-30
-* 🔥 Recent Activity: 1059.3 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 1062.6 days (avg. age of last 42 commits)
* ⚖️ License: GPL-3.0
* 🧪 Status: Experimental (no releases yet)
@@ -1042,7 +1005,7 @@ declare -r SCREEN=eDP-1
* 📈 Lines of Code: 342
* 📄 Lines of Documentation: 39
* 📅 Development Period: 2011-11-19 to 2022-04-02
-* 🔥 Recent Activity: 1278.9 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 1282.2 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🏷️ Latest Release: 0.5.0 (2022-02-21)
@@ -1084,7 +1047,7 @@ scalephotos () {
* 📈 Lines of Code: 1728
* 📄 Lines of Documentation: 18
* 📅 Development Period: 2020-07-12 to 2023-04-09
-* 🔥 Recent Activity: 1430.0 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 1433.3 days (avg. age of last 42 commits)
* ⚖️ License: Custom License
* 🧪 Status: Experimental (no releases yet)
@@ -1097,21 +1060,16 @@ The project leverages Go's generics system to provide type-safe implementations
=> https://codeberg.org/snonux/algorithms View on Codeberg
=> https://github.com/snonux/algorithms View on GitHub
-Go from `queue/elementarypriority.go`:
+Go from `queue/priority.go`:
```AUTO
-func (q *ElementaryPriority[T]) DeleteMax() T {
- if q.Empty() {
- return 0
- }
-
- ind, max := q.max()
- for i := ind + 1; i < q.Size(); i++ {
- q.a[i-1] = q.a[i]
- }
- q.a = q.a[0 : len(q.a)-1]
-
- return max
+type PriorityQueue interface {
+ Insert(a int)
+ Max() (max int)
+ DeleteMax() int
+ Empty() bool
+ Size() int
+ Clear()
}
```
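For illustration, the interface above can be satisfied by an elementary unordered-slice queue; a hypothetical sketch, not the repository's own implementation:

```go
package main

import "fmt"

// ElementaryPQ is an unordered-slice priority queue satisfying the
// PriorityQueue interface shown above: Insert is O(1), while Max
// and DeleteMax scan the slice in O(n).
type ElementaryPQ struct{ a []int }

func (q *ElementaryPQ) Insert(v int) { q.a = append(q.a, v) }

// Max assumes a non-empty queue.
func (q *ElementaryPQ) Max() (max int) {
	max = q.a[0]
	for _, v := range q.a[1:] {
		if v > max {
			max = v
		}
	}
	return max
}

// DeleteMax removes and returns the largest element; it assumes a
// non-empty queue.
func (q *ElementaryPQ) DeleteMax() int {
	maxIdx := 0
	for i, v := range q.a {
		if v > q.a[maxIdx] {
			maxIdx = i
		}
	}
	max := q.a[maxIdx]
	q.a = append(q.a[:maxIdx], q.a[maxIdx+1:]...)
	return max
}

func (q *ElementaryPQ) Empty() bool { return len(q.a) == 0 }
func (q *ElementaryPQ) Size() int   { return len(q.a) }
func (q *ElementaryPQ) Clear()      { q.a = nil }

func main() {
	q := &ElementaryPQ{}
	q.Insert(3)
	q.Insert(7)
	q.Insert(5)
	fmt.Println(q.DeleteMax(), q.DeleteMax(), q.Size()) // 7 5 1
}
```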
@@ -1125,7 +1083,7 @@ func (q *ElementaryPriority[T]) DeleteMax() T {
* 📈 Lines of Code: 671
* 📄 Lines of Documentation: 19
* 📅 Development Period: 2018-05-26 to 2025-01-21
-* 🔥 Recent Activity: 1431.8 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 1435.1 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -1154,11 +1112,11 @@ def out(message, prefix, flag = :none)
### foo.zone
* 📚 Documentation: Markdown (100.0%)
-* 📊 Commits: 2905
+* 📊 Commits: 2911
* 📈 Lines of Code: 0
* 📄 Lines of Documentation: 23
* 📅 Development Period: 2021-05-21 to 2022-04-02
-* 🔥 Recent Activity: 1445.6 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 1448.9 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -1181,7 +1139,7 @@ The site is built using **Gemtexter**, a static site generator that creates both
* 📈 Lines of Code: 51
* 📄 Lines of Documentation: 69
* 📅 Development Period: 2014-03-24 to 2022-04-23
-* 🔥 Recent Activity: 1911.0 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 1914.3 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -1215,7 +1173,7 @@ sub hello() {
* 📈 Lines of Code: 12420
* 📄 Lines of Documentation: 610
* 📅 Development Period: 2018-03-01 to 2020-01-22
-* 🔥 Recent Activity: 2452.5 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 2455.8 days (avg. age of last 42 commits)
* ⚖️ License: Apache-2.0
* 🏷️ Latest Release: 0.5.1 (2019-01-04)
@@ -1240,7 +1198,7 @@ The tool is implemented in C for minimal overhead and uses SystemTap for efficie
* 📈 Lines of Code: 919
* 📄 Lines of Documentation: 12
* 📅 Development Period: 2015-01-02 to 2021-11-04
-* 🔥 Recent Activity: 2961.2 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 2964.5 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🏷️ Latest Release: 1.1.3 (2015-01-02)
@@ -1253,7 +1211,7 @@ The system is particularly useful for distributed static content delivery where
=> https://codeberg.org/snonux/staticfarm-apache-handlers View on Codeberg
=> https://github.com/snonux/staticfarm-apache-handlers View on GitHub
-Perl from `src/StaticFarm/API.pm`:
+Perl from `debian/staticfarm-apache-handlers/usr/share/staticfarm/apache/handlers/StaticFarm/API.pm`:
```AUTO
sub handler {
@@ -1311,7 +1269,7 @@ sub handler {
* 📈 Lines of Code: 18
* 📄 Lines of Documentation: 49
* 📅 Development Period: 2014-03-24 to 2021-11-05
-* 🔥 Recent Activity: 3197.1 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 3200.4 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -1334,7 +1292,7 @@ The implementation consists of a shell script (`update-dyndns`) that accepts hos
* 📈 Lines of Code: 5360
* 📄 Lines of Documentation: 789
* 📅 Development Period: 2015-01-02 to 2021-11-05
-* 🔥 Recent Activity: 3463.8 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 3467.1 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🏷️ Latest Release: 1.0.1 (2015-01-02)
@@ -1347,7 +1305,7 @@ The tool is particularly useful for system administrators and DevOps engineers w
=> https://codeberg.org/snonux/mon View on Codeberg
=> https://github.com/snonux/mon View on GitHub
-Perl from `debian/mon/usr/share/mon/lib/MAPI/RESTlos.pm`:
+Perl from `lib/MON/Cache.pm`:
```AUTO
sub new {
@@ -1371,7 +1329,7 @@ sub new {
* 📈 Lines of Code: 273
* 📄 Lines of Documentation: 32
* 📅 Development Period: 2015-09-29 to 2021-11-05
-* 🔥 Recent Activity: 3468.0 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 3471.2 days (avg. age of last 42 commits)
* ⚖️ License: Apache-2.0
* 🏷️ Latest Release: 0 (2015-10-26)
@@ -1407,7 +1365,7 @@ def initialize
* 📈 Lines of Code: 1839
* 📄 Lines of Documentation: 412
* 📅 Development Period: 2015-01-02 to 2021-11-05
-* 🔥 Recent Activity: 3547.6 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 3550.9 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🏷️ Latest Release: 1.0.2 (2015-01-02)
@@ -1420,34 +1378,23 @@ The project is implemented as a modular Perl application with a clean architectu
=> https://codeberg.org/snonux/pingdomfetch View on Codeberg
=> https://github.com/snonux/pingdomfetch View on GitHub
-Perl from `lib/PINGDOMFETCH/Pingdom.pm`:
+Perl from `lib/PINGDOMFETCH/Pingdomfetch.pm`:
```AUTO
sub new {
- my ( $class, $config ) = @_;
-
- my $app_key = $config->get('pingdom.api.app.key');
- my $host = $config->get('pingdom.api.host');
- my $port = $config->get('pingdom.api.port');
- my $protocol = $config->get('pingdom.api.protocol');
-
- my $json = JSON->new()->allow_nonref();
+ my ( $class, $opts ) = @_;
-
- my $headers = {
- 'App-key' => $app_key,
- 'User-Agent' => 'pingdomfetch',
- };
-
- my $url_base = "$protocol://$host:$port";
+ my $config = PINGDOMFETCH::Config->new($opts);
+ my $pingdom = PINGDOMFETCH::Pingdom->new($config);
my $self = bless {
- config => $config,
- json => $json,
- url_base => $url_base,
- headers => $headers,
+ config => $config,
+ pingdom => $pingdom,
+ dots_counter => 0,
}, $class;
+ $self->init_from_to_interval();
+
return $self;
}
```
@@ -1462,7 +1409,7 @@ sub new {
* 📈 Lines of Code: 499
* 📄 Lines of Documentation: 8
* 📅 Development Period: 2015-05-24 to 2021-11-03
-* 🔥 Recent Activity: 3558.3 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 3561.6 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🏷️ Latest Release: 0.1 (2015-06-01)
@@ -1475,23 +1422,40 @@ The implementation follows a concurrent architecture using Go's goroutines and c
=> https://codeberg.org/snonux/gotop View on Codeberg
=> https://github.com/snonux/gotop View on GitHub
-Go from `utils/utils.go`:
+Go from `process/process.go`:
```AUTO
-func Slurp(what *string, path string) error {
- bytes, err := ioutil.ReadFile(path)
+func new(pidstr string) (Process, error) {
+ pid, err := strconv.Atoi(pidstr)
if err != nil {
- return err
+ return Process{}, err
+ }
+
+ timestamp := int32(time.Now().Unix())
+ p := Process{Pid: pid, Timestamp: timestamp}
+ var rawIo string
+
+ if err = utils.Slurp(&rawIo, fmt.Sprintf("/proc/%d/io", pid)); err != nil {
+ return p, err
+ }
+
+ if err = p.parseRawIo(rawIo); err != nil {
+ return p, err
+ }
+
+ if err = utils.Slurp(&p.Comm, fmt.Sprintf("/proc/%d/comm", pid)); err != nil {
+ return p, err
+ }
+
+ err = utils.Slurp(&p.Cmdline, fmt.Sprintf("/proc/%d/cmdline", pid))
+
+ if p.Cmdline == "" {
+ p.Id = fmt.Sprintf("(%s) %s", pidstr, p.Comm)
} else {
- for _, byte := range bytes {
- if byte == 0 {
- *what += " "
- } else {
- *what += string(byte)
- }
- }
+ p.Id = fmt.Sprintf("(%s) %s", pidstr, p.Cmdline)
}
- return nil
+
+ return p, err
}
```
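The /proc/<pid>/io file the snippet slurps is a fixed "key: value" format (rchar, wchar, read_bytes, write_bytes, and friends). A standalone sketch of parsing it into counters; the function name is an assumption, not gotop's actual parseRawIo:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseProcIo turns the "key: value" lines of /proc/<pid>/io into a
// map of I/O counters.
func parseProcIo(raw string) (map[string]int64, error) {
	counters := make(map[string]int64)
	for _, line := range strings.Split(strings.TrimSpace(raw), "\n") {
		key, value, ok := strings.Cut(line, ":")
		if !ok {
			continue // skip malformed lines
		}
		n, err := strconv.ParseInt(strings.TrimSpace(value), 10, 64)
		if err != nil {
			return nil, err
		}
		counters[strings.TrimSpace(key)] = n
	}
	return counters, nil
}

func main() {
	sample := "rchar: 4292\nwchar: 0\nread_bytes: 8192\n"
	c, _ := parseProcIo(sample)
	fmt.Println(c["read_bytes"]) // 8192
}
```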
@@ -1503,7 +1467,7 @@ func Slurp(what *string, path string) error {
* 📊 Commits: 670
* 📈 Lines of Code: 1675
* 📅 Development Period: 2011-03-06 to 2018-12-22
-* 🔥 Recent Activity: 3614.0 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 3617.2 days (avg. age of last 42 commits)
* ⚖️ License: Custom License
* 🏷️ Latest Release: v1.0.0 (2018-12-22)
@@ -1518,22 +1482,18 @@ The system works through a template-driven architecture where content is written
=> https://codeberg.org/snonux/xerl View on Codeberg
=> https://github.com/snonux/xerl View on GitHub
-Perl from `Xerl/XML/Element.pm`:
+Perl from `Xerl/XML/Reader.pm`:
```AUTO
-sub starttag {
- my $self = $_[0];
- my ( $name, $temp ) = ( $_[1], undef );
-
- return $self if $self->get_name() eq $name;
- return undef if ref $self->get_array() ne 'ARRAY';
+sub open {
+ my $self = shift;
- for ( @{ $self->get_array() } ) {
- $temp = $_->starttag($name);
- return $temp if defined $temp;
+ if ( -f $self->get_path() ) {
+ return 0;
+ }
+ else {
+ return 1;
}
-
- return undef;
}
```
@@ -1547,7 +1507,7 @@ sub starttag {
* 📈 Lines of Code: 88
* 📄 Lines of Documentation: 148
* 📅 Development Period: 2015-06-18 to 2015-12-05
-* 🔥 Recent Activity: 3662.1 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 3665.3 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -1562,26 +1522,17 @@ The implementation works by creating a Debian filesystem image using debootstrap
=> https://codeberg.org/snonux/debroid View on Codeberg
=> https://github.com/snonux/debroid View on GitHub
-Shell from `storage/sdcard1/Linux/jessie.sh`:
+Shell from `data/local/userinit.sh`:
```AUTO
-function mount_chroot {
- mountpoint $ROOT
- if [ $? -ne 0 ]; then
- losetup $LOOP_DEVICE $ROOT.img
- busybox mount -t ext4 $LOOP_DEVICE $ROOT
- fi
- for mountpoint in proc dev sys dev/pts; do
- mountpoint $ROOT/$mountpoint
- if [ $? -ne 0 ]; then
- busybox mount --bind /$mountpoint $ROOT/$mountpoint
- fi
- done
- mountpoint $ROOT/storage/sdcard1
- if [ $? -ne 0 ]; then
- busybox mount --bind /storage/sdcard1 $ROOT/storage/sdcard1
+while : ; do
+ if [ -d /storage/sdcard1/Linux/jessie ]; then
+ cd /storage/sdcard1/Linux && /system/bin/sh jessie.sh start_services
+ /system/bin/date
+ exit 0
fi
-}
+ /system/bin/sleep 1
+done
```
---
@@ -1594,7 +1545,7 @@ function mount_chroot {
* 📈 Lines of Code: 1681
* 📄 Lines of Documentation: 539
* 📅 Development Period: 2014-03-10 to 2021-11-03
-* 🔥 Recent Activity: 3940.1 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 3943.3 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🏷️ Latest Release: 1.0.2 (2014-11-17)
@@ -1634,7 +1585,7 @@ class BIGIP(object):
* 📈 Lines of Code: 65
* 📄 Lines of Documentation: 228
* 📅 Development Period: 2013-03-22 to 2021-11-04
-* 🔥 Recent Activity: 3994.5 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 3997.8 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🏷️ Latest Release: 0.0.0.0 (2013-03-22)
@@ -1669,7 +1620,7 @@ build:
* 📈 Lines of Code: 136
* 📄 Lines of Documentation: 96
* 📅 Development Period: 2013-03-22 to 2021-11-05
-* 🔥 Recent Activity: 4007.5 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 4010.8 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🏷️ Latest Release: 0.2.0 (2014-07-05)
@@ -1704,7 +1655,7 @@ build:
* 📈 Lines of Code: 134
* 📄 Lines of Documentation: 106
* 📅 Development Period: 2013-03-22 to 2021-11-05
-* 🔥 Recent Activity: 4015.0 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 4018.2 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🏷️ Latest Release: 0.1.5 (2014-06-22)
@@ -1727,7 +1678,7 @@ The tool works by having both hosts run the same command simultaneously - one ac
* 📈 Lines of Code: 493
* 📄 Lines of Documentation: 26
* 📅 Development Period: 2009-09-27 to 2021-11-02
-* 🔥 Recent Activity: 4058.3 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 4061.5 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🏷️ Latest Release: 0.9.3 (2014-06-14)
@@ -1767,7 +1718,7 @@ function findbin () {
* 📈 Lines of Code: 286
* 📄 Lines of Documentation: 144
* 📅 Development Period: 2013-03-22 to 2021-11-05
-* 🔥 Recent Activity: 4063.3 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 4066.6 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🏷️ Latest Release: 0.4.3 (2014-06-16)
@@ -1790,7 +1741,7 @@ The implementation uses modern Perl with the Moo object system and consists of t
* 📈 Lines of Code: 191
* 📄 Lines of Documentation: 8
* 📅 Development Period: 2014-03-24 to 2014-03-24
-* 🔥 Recent Activity: 4124.6 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 4127.8 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -1803,18 +1754,18 @@ Each script explores different themes - Christmas celebrations, mathematical stu
=> https://codeberg.org/snonux/perl-poetry View on Codeberg
=> https://github.com/snonux/perl-poetry View on GitHub
-Perl from `math.pl`:
+Perl from `perllove.pl`:
```AUTO
-do { int'egrate'; sub trade; };
-do { exp'onentize' and abs'olutize' };
-study and study and study and study;
-
-foreach $topic ({of, math}) {
-you, m/ay /go, to, limits }
-
-do { not qw/erk / unless $success
-and m/ove /o;$n and study };
+no strict;
+no warnings;
+we: do { print 'love'
+or warn and die 'slow'
+unless not defined true #respect
+} for reverse'd', qw/mind of you/
+and map { 'me' } 'into', undef $mourning;
+__END__
+v2 Copyright (2005, 2006) by Paul C. Buetow, http://paul.buetow.org
```
---
@@ -1825,7 +1776,7 @@ and m/ove /o;$n and study };
* 📊 Commits: 7
* 📈 Lines of Code: 80
* 📅 Development Period: 2011-07-09 to 2015-01-13
-* 🔥 Recent Activity: 4204.6 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 4207.9 days (avg. age of last 42 commits)
* ⚖️ License: Custom License
* 🧪 Status: Experimental (no releases yet)
@@ -1870,38 +1821,15 @@ if ($ENV{SERVER_NAME} eq 'ipv6.buetow.org') {
---
-### cpuinfo
-
-* 💻 Languages: Shell (53.2%), Make (46.8%)
-* 📚 Documentation: Text (100.0%)
-* 📊 Commits: 28
-* 📈 Lines of Code: 124
-* 📄 Lines of Documentation: 75
-* 📅 Development Period: 2010-11-05 to 2021-11-05
-* 🔥 Recent Activity: 4245.3 days (avg. age of last 42 commits)
-* ⚖️ License: No license found
-* 🏷️ Latest Release: 1.0.2 (2014-06-22)
-
-⚠️ **Notice**: This project appears to be finished, obsolete, or no longer maintained. Last meaningful activity was over 2 years ago. Use at your own risk.
-
-**cpuinfo** is a small command-line utility that provides a human-readable summary of CPU information on Linux systems. It parses `/proc/cpuinfo` using AWK to extract and display key processor details including the CPU model, cache size, number of physical processors, cores, and whether hyper-threading is enabled. The tool calculates total CPU frequency and bogomips across all cores, making it easier to understand complex multi-core and multi-processor configurations at a glance.
-
-The implementation is remarkably simple - a single shell script that uses GNU AWK to parse the kernel's CPU information and format it into a clear, structured output. It's particularly useful for system administrators and developers who need to quickly understand CPU topology, especially on servers with multiple processors or complex threading configurations where the raw `/proc/cpuinfo` output can be overwhelming.
-
-=> https://codeberg.org/snonux/cpuinfo View on Codeberg
-=> https://github.com/snonux/cpuinfo View on GitHub
-
----
-
### loadbars
* 💻 Languages: Perl (97.4%), Make (2.6%)
-* 📚 Documentation: Text (100.0%)
+* 📚 Documentation: Text (93.5%), Markdown (6.5%)
* 📊 Commits: 527
* 📈 Lines of Code: 1828
-* 📄 Lines of Documentation: 100
+* 📄 Lines of Documentation: 107
* 📅 Development Period: 2010-11-05 to 2015-05-23
-* 🔥 Recent Activity: 4275.4 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 4215.4 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🏷️ Latest Release: 0.7.5 (2014-06-22)
@@ -1914,27 +1842,49 @@ The application is implemented using a multi-threaded architecture where each mo
=> https://codeberg.org/snonux/loadbars View on Codeberg
=> https://github.com/snonux/loadbars View on GitHub
-Perl from `lib/Loadbars/HelpDispatch.pm`:
+Perl from `lib/Loadbars/Utils.pm`:
```AUTO
-sub create () {
- my $hosts = '';
-
- my $textdesc = <<END;
-For more help please consult the manual page or press the 'h' hotkey during
- program execution and watch this terminal window.
-END
+sub trim (\$) {
+ my $str = shift;
+ $$str =~ s/^[\s\t]+//;
+ $$str =~ s/[\s\t]+$//;
+ return undef;
+}
```
---
+### cpuinfo
+
+* 💻 Languages: Shell (53.2%), Make (46.8%)
+* 📚 Documentation: Text (100.0%)
+* 📊 Commits: 28
+* 📈 Lines of Code: 124
+* 📄 Lines of Documentation: 75
+* 📅 Development Period: 2010-11-05 to 2021-11-05
+* 🔥 Recent Activity: 4248.5 days (avg. age of last 42 commits)
+* ⚖️ License: No license found
+* 🏷️ Latest Release: 1.0.2 (2014-06-22)
+
+⚠️ **Notice**: This project appears to be finished, obsolete, or no longer maintained. Last meaningful activity was over 2 years ago. Use at your own risk.
+
+**cpuinfo** is a small command-line utility that provides a human-readable summary of CPU information on Linux systems. It parses `/proc/cpuinfo` using AWK to extract and display key processor details including the CPU model, cache size, number of physical processors, cores, and whether hyper-threading is enabled. The tool calculates total CPU frequency and bogomips across all cores, making it easier to understand complex multi-core and multi-processor configurations at a glance.
+
+The implementation is remarkably simple - a single shell script that uses GNU AWK to parse the kernel's CPU information and format it into a clear, structured output. It's particularly useful for system administrators and developers who need to quickly understand CPU topology, especially on servers with multiple processors or complex threading configurations where the raw `/proc/cpuinfo` output can be overwhelming.
+
+=> https://codeberg.org/snonux/cpuinfo View on Codeberg
+=> https://github.com/snonux/cpuinfo View on GitHub
+
+---
+
### perldaemon
* 💻 Languages: Perl (72.3%), Shell (23.8%), Config (3.9%)
* 📊 Commits: 110
* 📈 Lines of Code: 614
* 📅 Development Period: 2011-02-05 to 2022-04-21
-* 🔥 Recent Activity: 4324.8 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 4328.1 days (avg. age of last 42 commits)
* ⚖️ License: Custom License
* 🏷️ Latest Release: v1.4 (2022-04-29)
@@ -1947,21 +1897,17 @@ The architecture centers around a modular plugin system where custom functionali
=> https://codeberg.org/snonux/perldaemon View on Codeberg
=> https://github.com/snonux/perldaemon View on GitHub
-Perl from `lib/PerlDaemon/RunModules.pm`:
+Perl from `lib/PerlDaemonModules/ExampleModule2.pm`:
```AUTO
-sub new ($$) {
+sub new ($$$) {
my ($class, $conf) = @_;
my $self = bless { conf => $conf }, $class;
+ $self->{counter} = 0;
- my $modulesdir = $conf->{'daemon.modules.dir'};
- my $logger = $conf->{logger};
- my %loadedmodules;
- my %scheduler;
-
- if (-d $modulesdir) {
- $logger->logmsg("Loading modules from $modulesdir");
+ return $self;
+}
```
---
@@ -1974,7 +1920,7 @@ sub new ($$) {
* 📈 Lines of Code: 122
* 📄 Lines of Documentation: 10
* 📅 Development Period: 2011-01-27 to 2014-06-22
-* 🔥 Recent Activity: 4655.8 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 4659.1 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🏷️ Latest Release: v0.2 (2011-01-27)
@@ -2019,7 +1965,7 @@ function read_config_values(config_file) {
* 📈 Lines of Code: 720
* 📄 Lines of Documentation: 6
* 📅 Development Period: 2008-06-21 to 2021-11-03
-* 🔥 Recent Activity: 4718.5 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 4721.8 days (avg. age of last 42 commits)
* ⚖️ License: Custom License
* 🏷️ Latest Release: v0.3 (2009-02-08)
@@ -2073,7 +2019,7 @@ public SPrefs(Component parent, HashMap<String,String> options) {
* 📈 Lines of Code: 17380
* 📄 Lines of Documentation: 947
* 📅 Development Period: 2009-02-07 to 2021-05-01
-* 🔥 Recent Activity: 5349.2 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 5352.5 days (avg. age of last 42 commits)
* ⚖️ License: GPL-2.0
* 🏷️ Latest Release: v0.1 (2009-02-08)
@@ -2090,18 +2036,13 @@ The implementation uses a clean separation of concerns with dedicated packages f
=> https://codeberg.org/snonux/netcalendar View on Codeberg
=> https://github.com/snonux/netcalendar View on GitHub
-Java from `sources/client/helper/DateSpinner.java`:
+Java from `sources/client/inputforms/CreateNewEvent.java`:
```AUTO
-private void initComponents() {
- setLayout(new FlowLayout(FlowLayout.LEFT, 4, 4));
-
- spinnerDateModel = new SpinnerDateModel(date, null, null, Calendar.MONTH);
- JSpinner jSpinner = new JSpinner(spinnerDateModel);
- new JSpinner.DateEditor(jSpinner, "MM/yy");
+private final static long serialVersionUID = 1L;
- add(jSpinner);
-}
+private final static String[] labels =
+ { "Description: ", "Category: ", "Place: ", "Yearly: ", "Date: "};
```
---
@@ -2114,7 +2055,7 @@ private void initComponents() {
* 📈 Lines of Code: 67884
* 📄 Lines of Documentation: 127
* 📅 Development Period: 2008-05-15 to 2014-06-30
-* 🔥 Recent Activity: 5369.5 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 5372.7 days (avg. age of last 42 commits)
* ⚖️ License: GPL-2.0
* 🏷️ Latest Release: yhttpd-0.7.2 (2013-04-06)
@@ -2131,46 +2072,13 @@ The architecture is built around several key managers: a socket manager for hand
---
-### vs-sim
-
-* 💻 Languages: Java (98.6%), Shell (0.8%), XML (0.4%)
-* 📚 Documentation: LaTeX (98.4%), Text (1.4%), Markdown (0.2%)
-* 📊 Commits: 411
-* 📈 Lines of Code: 14582
-* 📄 Lines of Documentation: 2903
-* 📅 Development Period: 2008-05-15 to 2022-04-03
-* 🔥 Recent Activity: 5385.5 days (avg. age of last 42 commits)
-* ⚖️ License: Custom License
-* 🏷️ Latest Release: v1.0 (2008-08-24)
-
-⚠️ **Notice**: This project appears to be finished, obsolete, or no longer maintained. Last meaningful activity was over 2 years ago. Use at your own risk.
-
-=> showcase/vs-sim/image-1.jpg vs-sim screenshot
-
-VS-Sim is an open-source distributed systems simulator written in Java, developed as a diploma thesis at Aachen University of Applied Sciences. It provides a visual environment for simulating and understanding distributed system algorithms including consensus protocols (one-phase/two-phase commit), time synchronization (Berkeley, Lamport, vector clocks), and communication patterns (multicast, broadcast, reliable messaging). The simulator is useful for educational purposes, allowing students and researchers to visualize complex distributed system concepts through interactive simulations.
-
-The implementation features a modular architecture with separate packages for core processes, events, protocols, and visualization. It includes pre-built protocol implementations, a GUI-based simulator with start/pause/reset controls, serialization support for saving simulations, and comprehensive time modeling systems. The codebase demonstrates clean separation of concerns with abstract base classes for extensibility and a plugin-like protocol system for easy addition of new distributed algorithms.
-
-=> https://codeberg.org/snonux/vs-sim View on Codeberg
-=> https://github.com/snonux/vs-sim View on GitHub
-
-Java from `sources/exceptions/VSNegativeNumberException.java`:
-
-```AUTO
-public class VSNegativeNumberException extends Exception {
- private static final long serialVersionUID = 1L;
-}
-```
-
----
-
### hsbot
* 💻 Languages: Haskell (98.5%), Make (1.5%)
* 📊 Commits: 80
* 📈 Lines of Code: 601
* 📅 Development Period: 2009-11-22 to 2011-10-17
-* 🔥 Recent Activity: 5444.8 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 5448.1 days (avg. age of last 42 commits)
* ⚖️ License: Custom License
* 🧪 Status: Experimental (no releases yet)
@@ -2183,32 +2091,32 @@ The implementation uses a clean separation of concerns with modules for IRC conn
=> https://codeberg.org/snonux/hsbot View on Codeberg
=> https://github.com/snonux/hsbot View on GitHub
-Haskell from `HsBot/Plugins/MessageCounter.hs`:
+Haskell from `HsBot/Plugins/PrintMessages.hs`:
```AUTO
-module HsBot.Plugins.MessageCounter (makeMessageCounter) where
+module HsBot.Plugins.PrintMessages (makePrintMessages) where
import HsBot.Plugins.Base
import HsBot.Base.Env
import HsBot.Base.State
-import HsBot.IRC.User
-
-update user = user { userMessages = 1 + userMessages user }
+printMessages :: CallbackFunction
+printMessages str sendMessage env@(Env state _) = do
+ putStrLn $ (currentChannel state) ++ " "
```
---
### fype
-* 💻 Languages: C (71.2%), C/C++ (20.7%), HTML (6.6%), Make (1.5%)
-* 📚 Documentation: Text (60.3%), LaTeX (39.7%)
+* 💻 Languages: C (72.1%), C/C++ (20.7%), HTML (5.7%), Make (1.5%)
+* 📚 Documentation: Text (71.3%), LaTeX (28.7%)
* 📊 Commits: 99
-* 📈 Lines of Code: 8954
-* 📄 Lines of Documentation: 1432
-* 📅 Development Period: 2008-05-15 to 2014-06-30
-* 🔥 Recent Activity: 5831.5 days (avg. age of last 42 commits)
+* 📈 Lines of Code: 10196
+* 📄 Lines of Documentation: 1741
+* 📅 Development Period: 2008-05-15 to 2021-11-03
+* 🔥 Recent Activity: 5609.9 days (avg. age of last 42 commits)
* ⚖️ License: Custom License
* 🧪 Status: Experimental (no releases yet)
@@ -2221,16 +2129,24 @@ The implementation is built using a straightforward top-down parser with a maxim
=> https://codeberg.org/snonux/fype View on Codeberg
=> https://github.com/snonux/fype View on GitHub
-C from `src/core/scanner.h`:
+---
-```AUTO
-typedef struct {
- int i_current_line_nr;
- int i_current_pos_nr;
- int i_num_tokenends;
- char *c_filename;
- char *c_codestring;
- FILE *fp;
- List *p_list_token;
- TokenType tt_last;
-```
+### vs-sim
+
+* 📚 Documentation: Markdown (100.0%)
+* 📊 Commits: 411
+* 📈 Lines of Code: 0
+* 📄 Lines of Documentation: 7
+* 📅 Development Period: 2008-05-15 to 2015-05-23
+* 🔥 Recent Activity: 5809.1 days (avg. age of last 42 commits)
+* ⚖️ License: No license found
+* 🏷️ Latest Release: v1.0 (2008-08-24)
+
+⚠️ **Notice**: This project appears to be finished, obsolete, or no longer maintained. Last meaningful activity was over 2 years ago. Use at your own risk.
+
+VS-Sim is an open-source distributed systems simulator written in Java, developed as a diploma thesis at Aachen University of Applied Sciences. It provides a visual environment for simulating and understanding distributed system algorithms including consensus protocols (one-phase/two-phase commit), time synchronization (Berkeley, Lamport, vector clocks), and communication patterns (multicast, broadcast, reliable messaging). The simulator is useful for educational purposes, allowing students and researchers to visualize complex distributed system concepts through interactive simulations.
+
+The implementation features a modular architecture with separate packages for core processes, events, protocols, and visualization. It includes pre-built protocol implementations, a GUI-based simulator with start/pause/reset controls, serialization support for saving simulations, and comprehensive time modeling systems. The codebase demonstrates clean separation of concerns with abstract base classes for extensibility and a plugin-like protocol system for easy addition of new distributed algorithms.
+
+=> https://codeberg.org/snonux/vs-sim View on Codeberg
+=> https://github.com/snonux/vs-sim View on GitHub
diff --git a/about/showcase.gmi.tpl b/about/showcase.gmi.tpl
index 7e1bb251..fcbfa0bc 100644
--- a/about/showcase.gmi.tpl
+++ b/about/showcase.gmi.tpl
@@ -9,28 +9,29 @@ This page showcases my side projects, providing an overview of what each project
## Overall Statistics
* 📦 Total Projects: 55
-* 📊 Total Commits: 10,425
-* 📈 Total Lines of Code: 156,358
-* 📄 Total Lines of Documentation: 21,300
-* 💻 Languages: Go (30.3%), Java (25.9%), Perl (9.9%), C (9.0%), C/C++ (5.1%), Shell (4.1%), C++ (3.3%), HTML (1.9%), Config (1.9%), Ruby (1.8%), HCL (1.8%), Python (1.0%), Make (0.9%), Raku (0.8%), JSON (0.5%), CSS (0.5%), XML (0.4%), Haskell (0.4%), YAML (0.3%), TOML (0.2%)
-* 📚 Documentation: Text (51.2%), Markdown (46.1%), LaTeX (2.7%)
-* 🤖 AI-Assisted Projects: 8 out of 55 (14.5% AI-assisted, 85.5% human-only)
-* 🚀 Release Status: 32 released, 23 experimental (58.2% with releases, 41.8% experimental)
+* 📊 Total Commits: 10,446
+* 📈 Total Lines of Code: 211,600
+* 📄 Total Lines of Documentation: 21,802
+* 💻 Languages: Go (20.2%), Java (19.1%), C++ (17.6%), C/C++ (9.9%), Perl (8.1%), C (7.1%), Shell (6.9%), Config (2.2%), HTML (2.1%), Ruby (1.3%), HCL (1.3%), Make (0.9%), Python (0.8%), CSS (0.7%), Raku (0.6%), JSON (0.4%), XML (0.3%), Haskell (0.3%), YAML (0.2%), TOML (0.1%)
+* 📚 Documentation: Text (52.5%), Markdown (45.2%), LaTeX (2.3%)
+* 🎵 Vibe-Coded Projects: 2 out of 55 (3.6%)
+* 🤖 AI-Assisted Projects (including vibe-coded): 7 out of 55 (12.7% AI-assisted, 87.3% human-only)
+* 🚀 Release Status: 33 released, 22 experimental (60.0% with releases, 40.0% experimental)
## Projects
### gitsyncer
-* 💻 Languages: Go (86.7%), Shell (11.4%), YAML (1.4%), JSON (0.5%)
+* 💻 Languages: Go (89.5%), Shell (8.9%), YAML (1.1%), JSON (0.4%)
* 📚 Documentation: Markdown (100.0%)
-* 📊 Commits: 60
-* 📈 Lines of Code: 6548
-* 📄 Lines of Documentation: 2338
-* 📅 Development Period: 2025-06-23 to 2025-07-09
-* 🔥 Recent Activity: 6.1 days (avg. age of last 42 commits)
+* 📊 Commits: 76
+* 📈 Lines of Code: 8340
+* 📄 Lines of Documentation: 2363
+* 📅 Development Period: 2025-06-23 to 2025-07-12
+* 🔥 Recent Activity: 2.5 days (avg. age of last 42 commits)
* ⚖️ License: BSD-2-Clause
* 🏷️ Latest Release: v0.5.0 (2025-07-09)
-* 🤖 AI-Assisted: This project was partially created with the help of generative AI
+* 🎵 Vibe-Coded: This project has been vibe-coded
GitSyncer is a cross-platform repository synchronization tool that automatically keeps Git repositories in sync across multiple hosting platforms like GitHub, Codeberg, and private SSH servers. It solves the common problem of maintaining consistent code across different Git hosting services by cloning repositories, adding all configured platforms as remotes, and continuously merging and pushing changes bidirectionally while handling branch creation and conflict detection.
@@ -40,23 +41,14 @@ The tool is implemented in Go with a clean architecture that supports both indiv
=> https://codeberg.org/snonux/gitsyncer View on Codeberg
=> https://github.com/snonux/gitsyncer View on GitHub
-Go from `internal/sync/branch_filter.go`:
+Go from `internal/showcase/images.go`:
```AUTO
-func NewBranchFilter(excludePatterns []string) (*BranchFilter, error) {
- filter := &BranchFilter{
- excludePatterns: make([]*regexp.Regexp, 0, len(excludePatterns)),
- }
-
- for _, pattern := range excludePatterns {
- re, err := regexp.Compile(pattern)
- if err != nil {
- return nil, fmt.Errorf("invalid regex pattern '%s': %w", pattern, err)
- }
- filter.excludePatterns = append(filter.excludePatterns, re)
- }
-
- return filter, nil
+func isGitHostedImage(url string) bool {
+ return strings.Contains(url, "github.com") ||
+ strings.Contains(url, "githubusercontent.com") ||
+ strings.Contains(url, "codeberg.org") ||
+ strings.Contains(url, "codeberg.page")
}
```
@@ -66,13 +58,13 @@ func NewBranchFilter(excludePatterns []string) (*BranchFilter, error) {
* 💻 Languages: Go (98.3%), YAML (1.7%)
* 📚 Documentation: Markdown (100.0%)
-* 📊 Commits: 19
+* 📊 Commits: 20
* 📈 Lines of Code: 873
* 📄 Lines of Documentation: 135
-* 📅 Development Period: 2025-06-25 to 2025-06-29
-* 🔥 Recent Activity: 15.6 days (avg. age of last 42 commits)
+* 📅 Development Period: 2025-06-25 to 2025-07-12
+* 🔥 Recent Activity: 15.4 days (avg. age of last 42 commits)
* ⚖️ License: BSD-2-Clause
-* 🧪 Status: Experimental (no releases yet)
+* 🏷️ Latest Release: v0.0.0 (2025-06-29)
* 🤖 AI-Assisted: This project was partially created with the help of generative AI
@@ -99,14 +91,14 @@ func tick() tea.Cmd {
* 💻 Languages: Go (99.8%), YAML (0.2%)
* 📚 Documentation: Markdown (100.0%)
-* 📊 Commits: 215
+* 📊 Commits: 216
* 📈 Lines of Code: 6160
* 📄 Lines of Documentation: 162
-* 📅 Development Period: 2025-06-19 to 2025-07-08
-* 🔥 Recent Activity: 16.0 days (avg. age of last 42 commits)
+* 📅 Development Period: 2025-06-19 to 2025-07-12
+* 🔥 Recent Activity: 16.1 days (avg. age of last 42 commits)
* ⚖️ License: BSD-2-Clause
* 🏷️ Latest Release: v0.9.2 (2025-07-02)
-* 🤖 AI-Assisted: This project was partially created with the help of generative AI
+* 🎵 Vibe-Coded: This project has been vibe-coded
=> showcase/tasksamurai/image-1.png tasksamurai screenshot
@@ -120,28 +112,10 @@ The implementation follows a clean architecture with clear separation of concern
=> https://codeberg.org/snonux/tasksamurai View on Codeberg
=> https://github.com/snonux/tasksamurai View on GitHub
-Go from `internal/task/task.go`:
+Go from `internal/version.go`:
```AUTO
-func SetDebugLog(path string) error {
- if debugFile != nil {
- debugFile.Close()
- debugFile = nil
- debugWriter = nil
- }
-
- if path == "" {
- return nil
- }
-
- f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
- if err != nil {
- return err
- }
- debugFile = f
- debugWriter = f
- return nil
-}
+const Version = "0.9.2"
```
---
@@ -154,7 +128,7 @@ func SetDebugLog(path string) error {
* 📈 Lines of Code: 4123
* 📄 Lines of Documentation: 854
* 📅 Development Period: 2021-12-28 to 2025-07-12
-* 🔥 Recent Activity: 18.2 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 18.8 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -183,43 +157,6 @@ sub write ( $path, $content ) {
---
-### dtail
-
-* 💻 Languages: Go (93.9%), JSON (2.8%), C (2.0%), Make (0.5%), C/C++ (0.3%), Config (0.2%), Shell (0.2%), Docker (0.1%)
-* 📚 Documentation: Text (79.4%), Markdown (20.6%)
-* 📊 Commits: 1049
-* 📈 Lines of Code: 20091
-* 📄 Lines of Documentation: 5674
-* 📅 Development Period: 2020-01-09 to 2025-06-20
-* 🔥 Recent Activity: 55.1 days (avg. age of last 42 commits)
-* ⚖️ License: Apache-2.0
-* 🏷️ Latest Release: v4.2.0 (2023-06-21)
-* 🤖 AI-Assisted: This project was partially created with the help of generative AI
-
-
-=> showcase/dtail/image-1.png dtail screenshot
-
-DTail is a distributed log processing system written in Go that allows DevOps engineers to tail, cat, and grep log files across thousands of servers concurrently. It provides secure access through SSH authentication and respects UNIX file system permissions, making it ideal for enterprise environments where log analysis needs to scale horizontally across large server fleets. The tool supports advanced features like compressed file handling (gzip/zstd) and distributed MapReduce aggregations for complex log analytics.
-
-=> showcase/dtail/image-2.gif dtail screenshot
-
-The system uses a client-server architecture where dtail servers run on target machines (listening on port 2222) and clients connect to multiple servers simultaneously. It can also operate in serverless mode for local operations. The implementation leverages SSH for secure communication, includes sophisticated connection throttling and resource management, and provides specialized tools (dcat, dgrep, dmap) for different log processing tasks. The MapReduce functionality supports SQL-like queries with server-side local aggregation and client-side final aggregation, enabling powerful distributed analytics across log data.
-
-=> https://codeberg.org/snonux/dtail View on Codeberg
-=> https://github.com/snonux/dtail View on GitHub
-
-Go from `internal/mapr/groupset.go`:
-
-```AUTO
-func NewGroupSet() *GroupSet {
- g := GroupSet{}
- g.InitSet()
- return &g
-}
-```
-
----
-
### ior
* 💻 Languages: Go (81.0%), Raku (11.5%), C (4.4%), Make (1.7%), C/C++ (1.5%)
@@ -228,7 +165,7 @@ func NewGroupSet() *GroupSet {
* 📈 Lines of Code: 7911
* 📄 Lines of Documentation: 742
* 📅 Development Period: 2024-01-18 to 2025-07-12
-* 🔥 Recent Activity: 55.8 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 56.3 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
* 🤖 AI-Assisted: This project was partially created with the help of generative AI
@@ -273,7 +210,7 @@ func NewFd(fd int32, name []byte, flags int32) FdFile {
* 📈 Lines of Code: 396
* 📄 Lines of Documentation: 24
* 📅 Development Period: 2025-04-18 to 2025-05-11
-* 🔥 Recent Activity: 74.4 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 74.9 days (avg. age of last 42 commits)
* ⚖️ License: Custom License
* 🏷️ Latest Release: v1.0.0 (2025-05-11)
@@ -306,7 +243,7 @@ def initialize(myself)
* 📈 Lines of Code: 25762
* 📄 Lines of Documentation: 3101
* 📅 Development Period: 2008-05-15 to 2025-06-27
-* 🔥 Recent Activity: 87.8 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 88.3 days (avg. age of last 42 commits)
* ⚖️ License: Custom License
* 🧪 Status: Experimental (no releases yet)
* 🤖 AI-Assisted: This project was partially created with the help of generative AI
@@ -321,12 +258,16 @@ The project is built on an event-driven architecture with clear component separa
=> https://codeberg.org/snonux/ds-sim View on Codeberg
=> https://github.com/snonux/ds-sim View on GitHub
-Java from `src/main/java/testing/HeadlessLoader.java`:
+Java from `src/main/java/protocols/implementations/VSPingPongProtocol.java`:
```AUTO
-static {
- System.setProperty("java.awt.headless", "true");
- System.setProperty("ds.sim.headless", "true");
+private int clientCounter;
+
+private int serverCounter;
+
+public VSPingPongProtocol() {
+ super(VSAbstractProtocol.HAS_ON_CLIENT_START);
+ setClassname(getClass().toString());
}
```
@@ -340,7 +281,7 @@ static {
* 📈 Lines of Code: 33
* 📄 Lines of Documentation: 3
* 📅 Development Period: 2025-04-03 to 2025-04-03
-* 🔥 Recent Activity: 100.3 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 100.9 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -370,7 +311,7 @@ func main() {
* 📈 Lines of Code: 3967
* 📄 Lines of Documentation: 411
* 📅 Development Period: 2024-05-04 to 2025-06-12
-* 🔥 Recent Activity: 117.3 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 117.8 days (avg. age of last 42 commits)
* ⚖️ License: Custom License
* 🏷️ Latest Release: v1.0.0 (2025-03-04)
* 🤖 AI-Assisted: This project was partially created with the help of generative AI
@@ -387,26 +328,28 @@ The tool is architected around a file-based queueing system where posts progress
=> https://codeberg.org/snonux/gos View on Codeberg
=> https://github.com/snonux/gos View on GitHub
-Go from `internal/summary/summary.go`:
+Go from `internal/config/args.go`:
```AUTO
-func Run(ctx context.Context, args config.Args) error {
- entries, err := deduppedEntries(args)
- if err != nil {
- return err
- }
-
- sort.Slice(entries, func(i, j int) bool {
- return entries[i].Time.Before(entries[j].Time)
- })
-
- title := fmt.Sprintf("Posts for %s", strings.Join(args.GeminiSummaryFor, " "))
- gemtext, err := fmt.Print(generateGemtext(args, entries, title))
- if err != nil {
- return err
+func (a *Args) ParsePlatforms(platformStrs string) error {
+ a.Platforms = make(map[string]int)
+
+ for _, platformInfo := range strings.Split(platformStrs, ",") {
+ parts := strings.Split(platformInfo, ":")
+ platformStr := parts[0]
+
+ if len(parts) > 1 {
+ var err error
+ a.Platforms[platformStr], err = strconv.Atoi(parts[1])
+ if err != nil {
+ return err
+ }
+ } else {
+			colour.Infoln("No message length specified for", platformStr, "so assuming 500")
+			a.Platforms[platformStr] = 500
+		}
}
- fmt.Println(gemtext)
-
return nil
}
```
@@ -421,7 +364,7 @@ func Run(ctx context.Context, args config.Args) error {
* 📈 Lines of Code: 1586
* 📄 Lines of Documentation: 154
* 📅 Development Period: 2023-01-02 to 2025-07-12
-* 🔥 Recent Activity: 121.3 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 121.8 days (avg. age of last 42 commits)
* ⚖️ License: Custom License
* 🏷️ Latest Release: v0.1.0 (2025-07-12)
@@ -458,7 +401,7 @@ sub write ( $path, $content ) {
* 📈 Lines of Code: 1373
* 📄 Lines of Documentation: 48
* 📅 Development Period: 2024-12-05 to 2025-02-28
-* 🔥 Recent Activity: 141.1 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 141.6 days (avg. age of last 42 commits)
* ⚖️ License: Custom License
* 🧪 Status: Experimental (no releases yet)
@@ -470,44 +413,53 @@ The system is implemented with a modular architecture centered around a DSL clas
=> https://codeberg.org/snonux/rcm View on Codeberg
=> https://github.com/snonux/rcm View on GitHub
-Ruby from `lib/dslkeywords/file.rb`:
+Ruby from `lib/dslkeywords/given.rb`:
```AUTO
-def mode(what) = @mode = what
-def owner(what) = @owner = what
-def group(what) = @group = what
-
-def evaluate!
- unless super
- @mode = nil
- return false
- end
+def respond_to_missing? = true
+
+def met?
+ return false if @conds.key?(:hostname) && Socket.gethostname !=
+ @conds[:hostname].to_s
+
true
end
+```
-def content(text = nil)
- if text.nil?
- text = @from == :sourcefile ? ::File.read(@content) : @content
- return @from == :template ? ERB.new(text).result : text
- end
- @content = text.instance_of?(Array) ? text.join("\n") : text
-end
+---
+
+### gemtexter
-protected
+* 💻 Languages: Shell (68.1%), CSS (28.7%), Config (1.9%), HTML (1.3%)
+* 📚 Documentation: Text (76.1%), Markdown (23.9%)
+* 📊 Commits: 465
+* 📈 Lines of Code: 2268
+* 📄 Lines of Documentation: 1180
+* 📅 Development Period: 2021-05-21 to 2025-07-09
+* 🔥 Recent Activity: 204.0 days (avg. age of last 42 commits)
+* ⚖️ License: GPL-3.0
+* 🏷️ Latest Release: 3.0.0 (2024-10-01)
-def permissions!(file_path = path)
- return unless ::File.exist?(file_path)
- stat = ::File.stat(file_path)
- set_mode!(stat)
- set_owner!(stat)
-end
+**Gemtexter** is a static site generator and blog engine that transforms content written in Gemini Gemtext format into multiple output formats. It's a comprehensive Bash-based tool designed to support the Gemini protocol (a simpler alternative to HTTP) while maintaining compatibility with traditional web technologies. The project converts a single source of Gemtext content into HTML (XHTML 1.0 Transitional), Markdown, and native Gemtext formats, enabling authors to write once and publish across multiple platforms including Gemini capsules, traditional websites, and GitHub/Codeberg pages.
+
+The implementation is built entirely in Bash (version 5.x+) using a modular library approach with separate source files for different functionality (atomfeed, gemfeed, HTML generation, Markdown conversion, templating, etc.). Key features include automatic blog post indexing, Atom feed generation, customizable HTML themes, source code highlighting, Bash-based templating system, and integrated Git workflow management. The architecture separates content directories by format (gemtext/, html/, md/) and includes comprehensive theming support, font embedding, and publishing workflows that can automatically sync content to multiple Git repositories for deployment on various platforms.
+
+=> https://codeberg.org/snonux/gemtexter View on Codeberg
+=> https://github.com/snonux/gemtexter View on GitHub
+
+Shell from `lib/generate.source.sh`:
+
+```AUTO
+done < <(find "$CONTENT_BASE_DIR/gemtext" -type f -name \*.gmi)
+
+wait
+log INFO "Converted $num_gmi_files Gemtext files"
-def validate(method, what, *valids)
- return what if valids.include?(what)
+log VERBOSE "Adding other docs to $*"
- raise UnsupportedOperation,
- "Unsupported '#{method}' operation #{what} (#{what.class})"
+while read -r src; do
+ num_doc_files=$(( num_doc_files + 1 ))
```
---
@@ -520,7 +472,7 @@ def validate(method, what, *valids)
* 📈 Lines of Code: 917
* 📄 Lines of Documentation: 33
* 📅 Development Period: 2024-01-20 to 2025-07-06
-* 🔥 Recent Activity: 451.0 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 451.6 days (avg. age of last 42 commits)
* ⚖️ License: MIT
* 🏷️ Latest Release: v0.0.3 (2025-07-06)
@@ -591,7 +543,7 @@ func createPreferenceWindow(a fyne.App) fyne.Window {
* 📈 Lines of Code: 12
* 📄 Lines of Documentation: 3
* 📅 Development Period: 2024-03-24 to 2024-03-24
-* 🔥 Recent Activity: 474.9 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 475.4 days (avg. age of last 42 commits)
* ⚖️ License: Custom License
* 🧪 Status: Experimental (no releases yet)
@@ -628,7 +580,7 @@ aws: build
* 📈 Lines of Code: 2850
* 📄 Lines of Documentation: 52
* 📅 Development Period: 2023-08-27 to 2025-04-05
-* 🔥 Recent Activity: 504.9 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 505.4 days (avg. age of last 42 commits)
* ⚖️ License: MIT
* 🧪 Status: Experimental (no releases yet)
@@ -640,19 +592,18 @@ The system is designed to host multiple personal services including Anki sync se
=> https://codeberg.org/snonux/terraform View on Codeberg
=> https://github.com/snonux/terraform View on GitHub
-HCL from `org-buetow-eks/remotestates.tf`:
+HCL from `org-buetow-base/ecr.tf`:
```AUTO
-data "terraform_remote_state" "base" {
- backend = "s3"
- config = {
- bucket = "org-buetow-tfstate"
- key = "org-buetow-base/terraform.tfstate"
- region = "eu-central-1"
+resource "aws_ecr_repository" "radicale-read" {
+ name = "radicale"
+
+ tags = {
+ Name = "radicale"
}
}
-data "terraform_remote_state" "elb" {
+resource "aws_iam_policy" "ecr_radicale_read" {
```
---
@@ -665,7 +616,7 @@ data "terraform_remote_state" "elb" {
* 📈 Lines of Code: 1096
* 📄 Lines of Documentation: 287
* 📅 Development Period: 2023-04-17 to 2025-06-12
-* 🔥 Recent Activity: 517.7 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 518.3 days (avg. age of last 42 commits)
* ⚖️ License: Custom License
* 🏷️ Latest Release: v1.1.0 (2024-05-03)
* 🤖 AI-Assisted: This project was partially created with the help of generative AI
@@ -680,48 +631,32 @@ The implementation follows a clean architecture with concurrent check execution,
=> https://codeberg.org/snonux/gogios View on Codeberg
=> https://github.com/snonux/gogios View on GitHub
-Go from `internal/state.go`:
+Go from `internal/check.go`:
```AUTO
-func newState(conf config) (state, error) {
- s := state{
- stateFile: fmt.Sprintf("%s/state.json", conf.StateDir),
- checks: make(map[string]checkState),
- staleEpoch: time.Now().Unix() - int64(conf.StaleThreshold),
- }
-
- if _, err := os.Stat(s.stateFile); err != nil {
- return s, nil
- }
-
- file, err := os.Open(s.stateFile)
- if err != nil {
- return s, err
- }
- defer file.Close()
-
- bytes, err := io.ReadAll(file)
- if err != nil {
- return s, err
- }
+func (c check) run(ctx context.Context, name string) checkResult {
+ cmd := exec.CommandContext(ctx, c.Plugin, c.Args...)
- if err := json.Unmarshal(bytes, &s.checks); err != nil {
- return s, err
- }
+ var bytes bytes.Buffer
+ cmd.Stdout = &bytes
+ cmd.Stderr = &bytes
- var obsolete []string
- for name := range s.checks {
- if _, ok := conf.Checks[name]; !ok {
- obsolete = append(obsolete, name)
+ if err := cmd.Run(); err != nil {
+ if ctx.Err() == context.DeadlineExceeded {
+ return checkResult{name, "Check command timed out", time.Now().Unix(),
+ nagiosCritical, false}
}
}
- for _, name := range obsolete {
- delete(s.checks, name)
- log.Printf("State of %s is obsolete (removed)", name)
+ parts := strings.Split(bytes.String(), "|")
+ output := strings.TrimSpace(parts[0])
+
+ ec := cmd.ProcessState.ExitCode()
+ if ec < int(nagiosOk) || ec > int(nagiosUnknown) {
+ ec = int(nagiosUnknown)
}
- return s, nil
+ return checkResult{name, output, time.Now().Unix(), nagiosCode(ec), false}
}
```
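The exit-code clamping at the end of the excerpt (anything outside the Nagios range 0..3 becomes UNKNOWN) is a generic monitoring pattern. A minimal shell sketch of the same idea, independent of gogios' actual implementation:

```shell
# Sketch (not gogios itself): run a Nagios-style check plugin, keep
# its combined output, and clamp out-of-range exit codes to UNKNOWN.
# Nagios codes: 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN.
run_check() {
    local output ec
    output=$("$@" 2>&1)   # plugins report on stdout/stderr
    ec=$?
    if [ "$ec" -gt 3 ]; then
        ec=3              # anything else maps to UNKNOWN
    fi
    printf '%s\n' "$output"
    return "$ec"
}
```

For example, `run_check sh -c 'echo disk full; exit 42'` prints the plugin output and returns 3 (UNKNOWN).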
@@ -735,7 +670,7 @@ func newState(conf config) (state, error) {
* 📈 Lines of Code: 32
* 📄 Lines of Documentation: 3
* 📅 Development Period: 2023-12-31 to 2023-12-31
-* 🔥 Recent Activity: 558.4 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 558.9 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -772,7 +707,7 @@ run: build
* 📈 Lines of Code: 29
* 📄 Lines of Documentation: 3
* 📅 Development Period: 2023-08-13 to 2024-01-01
-* 🔥 Recent Activity: 651.6 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 652.2 days (avg. age of last 42 commits)
* ⚖️ License: MIT
* 🧪 Status: Experimental (no releases yet)
@@ -810,7 +745,7 @@ aws:
* 📈 Lines of Code: 1525
* 📄 Lines of Documentation: 15
* 📅 Development Period: 2023-04-17 to 2023-11-19
-* 🔥 Recent Activity: 703.8 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 704.3 days (avg. age of last 42 commits)
* ⚖️ License: Custom License
* 🧪 Status: Experimental (no releases yet)
@@ -822,33 +757,16 @@ The architecture consists of several key components: a quorum manager that handl
=> https://codeberg.org/snonux/gorum View on Codeberg
=> https://github.com/snonux/gorum View on GitHub
-Go from `internal/notifier/email.go`:
+Go from `internal/vote/vote.go`:
```AUTO
-func (em email) send(conf config.Config) error {
- if !conf.EmailNotifycationEnabled() {
- return nil
- }
- log.Println("notify:", em.subject, em.body)
-
- headers := map[string]string{
- "From": conf.EmailFrom,
- "To": conf.EmailTo,
- "Subject": em.subject,
- "MIME-Version": "1.0",
- "Content-Type": "text/plain; charset=\"utf-8\"",
- }
-
- header := ""
- for k, v := range headers {
- header += fmt.Sprintf("%s: %s\r\n", k, v)
- }
+func New(conf config.Config, ids ...string) (Vote, error) {
+ var v Vote
- message := header + "\r\n" + em.body
- log.Println("Using SMTP server", conf.SMTPServer)
+ v.FromID = conf.MyID
+ v.IDs = ids
- return smtp.SendMail(conf.SMTPServer, nil, conf.EmailFrom,
- []string{conf.EmailTo}, []byte(message))
+ return v, nil
}
```
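A quorum decision of the kind the description mentions boils down to a simple majority test. A minimal, hypothetical sketch of that test (not gorum's actual logic): given the cluster size and the number of YES votes, quorum means strictly more than half of all members agree.

```shell
# Hypothetical sketch of a majority-quorum test (not gorum's code):
# succeeds (exit 0) when votes form a strict majority of members.
has_quorum() {
    local members=$1 votes=$2
    (( votes * 2 > members ))   # strict majority; a tie is no quorum
}
```

For example, `has_quorum 5 3` succeeds (3 of 5), while `has_quorum 4 2` fails because a tie is not a majority.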
@@ -862,7 +780,7 @@ func (em email) send(conf config.Config) error {
* 📈 Lines of Code: 312
* 📄 Lines of Documentation: 416
* 📅 Development Period: 2013-03-22 to 2025-05-18
-* 🔥 Recent Activity: 753.8 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 754.3 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🏷️ Latest Release: v1.0.0 (2023-04-29)
@@ -905,7 +823,7 @@ method output-trim(Str \str, UInt \line-limit --> Str) {
* 📈 Lines of Code: 51
* 📄 Lines of Documentation: 26
* 📅 Development Period: 2022-06-02 to 2024-04-20
-* 🔥 Recent Activity: 768.5 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 769.1 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -930,40 +848,58 @@ declare -i NUM_PAGES_TO_EXTRACT=42 # This is the answear!
---
-### gemtexter
+### dtail
-* 💻 Languages: Shell (85.6%), CSS (9.0%), Config (3.3%), HTML (2.1%)
-* 📚 Documentation: Text (71.7%), Markdown (28.3%)
-* 📊 Commits: 465
-* 📈 Lines of Code: 1451
-* 📄 Lines of Documentation: 738
-* 📅 Development Period: 2021-05-21 to 2023-03-31
-* 🔥 Recent Activity: 853.0 days (avg. age of last 42 commits)
-* ⚖️ License: Custom License
-* 🏷️ Latest Release: 3.0.0 (2024-10-01)
+* 💻 Languages: Go (91.1%), JSON (4.1%), C (2.9%), Make (0.6%), C/C++ (0.5%), Config (0.3%), Shell (0.2%), Docker (0.2%)
+* 📚 Documentation: Text (80.4%), Markdown (19.6%)
+* 📊 Commits: 1049
+* 📈 Lines of Code: 13525
+* 📄 Lines of Documentation: 5375
+* 📅 Development Period: 2020-01-09 to 2023-10-05
+* 🔥 Recent Activity: 781.8 days (avg. age of last 42 commits)
+* ⚖️ License: Apache-2.0
+* 🏷️ Latest Release: v4.2.0 (2023-06-21)
⚠️ **Notice**: This project appears to be finished, obsolete, or no longer maintained. Last meaningful activity was over 2 years ago. Use at your own risk.
-**Gemtexter** is a static site generator and blog engine that transforms content written in Gemini Gemtext format into multiple output formats. It's a comprehensive Bash-based tool designed to support the Gemini protocol (a simpler alternative to HTTP) while maintaining compatibility with traditional web technologies. The project converts a single source of Gemtext content into HTML (XHTML 1.0 Transitional), Markdown, and native Gemtext formats, enabling authors to write once and publish across multiple platforms including Gemini capsules, traditional websites, and GitHub/Codeberg pages.
+=> showcase/dtail/image-1.png dtail screenshot
-The implementation is built entirely in Bash (version 5.x+) using a modular library approach with separate source files for different functionality (atomfeed, gemfeed, HTML generation, Markdown conversion, templating, etc.). Key features include automatic blog post indexing, Atom feed generation, customizable HTML themes, source code highlighting, Bash-based templating system, and integrated Git workflow management. The architecture separates content directories by format (gemtext/, html/, md/) and includes comprehensive theming support, font embedding, and publishing workflows that can automatically sync content to multiple Git repositories for deployment on various platforms.
+DTail is a distributed log processing system written in Go that allows DevOps engineers to tail, cat, and grep log files across thousands of servers concurrently. It provides secure access through SSH authentication and respects UNIX file system permissions, making it ideal for enterprise environments where log analysis needs to scale horizontally across large server fleets. The tool supports advanced features like compressed file handling (gzip/zstd) and distributed MapReduce aggregations for complex log analytics.
-=> https://codeberg.org/snonux/gemtexter View on Codeberg
-=> https://github.com/snonux/gemtexter View on GitHub
+=> showcase/dtail/image-2.gif dtail screenshot
+
+The system uses a client-server architecture where dtail servers run on target machines (listening on port 2222) and clients connect to multiple servers simultaneously. It can also operate in serverless mode for local operations. The implementation leverages SSH for secure communication, includes sophisticated connection throttling and resource management, and provides specialized tools (dcat, dgrep, dmap) for different log processing tasks. The MapReduce functionality supports SQL-like queries with server-side local aggregation and client-side final aggregation, enabling powerful distributed analytics across log data.
+
+=> https://codeberg.org/snonux/dtail View on Codeberg
+=> https://github.com/snonux/dtail View on GitHub
-Shell from `lib/html.source.sh`:
+Go from `internal/io/fs/readfilelcontext.go`:
```AUTO
- done < <(find "$html_base_dir" -mindepth 1 -maxdepth 1 -type d | $GREP -E
- -v '(\.git)')
- cp "$HTML_WEBFONT_TEXT" "$html_base_dir/text.ttf"
- cp "$HTML_WEBFONT_CODE" "$html_base_dir/code.ttf"
- cp "$HTML_WEBFONT_HANDNOTES" "$html_base_dir/handnotes.ttf"
- cp "$HTML_WEBFONT_TYPEWRITER" "$html_base_dir/typewriter.ttf"
-}
+func (f *readFile) lContextNotMatched(ctx context.Context, ls *ltxState,
+ lines chan<- *line.Line, rawLine *bytes.Buffer) readStatus {
+
+ if ls.processAfter && ls.after > 0 {
+ ls.after--
+ myLine := line.New(rawLine, f.totalLineCount(), 100, f.globID)
+
+ select {
+ case lines <- myLine:
+ case <-ctx.Done():
+ return abortReading
+ }
+
+ } else if ls.processBefore {
+ select {
+ case ls.beforeBuf <- rawLine:
+ default:
+ pool.RecycleBytesBuffer(<-ls.beforeBuf)
+ ls.beforeBuf <- rawLine
+ }
+ }
-html::fromgmi () {
- local is_list=no
+ return continueReading
+}
```
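The fan-out model described above can be approximated, purely for illustration, with plain ssh. This sketch is not dtail (which speaks its own protocol on port 2222 through its dcat/dgrep/dmap clients); the `RUN` variable is a hypothetical knob added here so the remote runner can be swapped out for testing:

```shell
# Illustration only, not dtail: run the same grep on many hosts in
# parallel and merge the results, the conceptual core of a distributed
# log search. RUN is the remote runner (ssh by default) and is a
# hypothetical override hook for this sketch.
fanout_grep() {
    local regex=$1 file=$2; shift 2
    local host
    for host in "$@"; do
        "${RUN:-ssh}" "$host" grep -h -e "$regex" "$file" &
    done
    wait   # gather every remote grep before returning
}
```

With `RUN` pointed at a local stub, the same loop runs against local files, which is how the fan-out logic can be exercised without any servers.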
---
@@ -976,7 +912,7 @@ html::fromgmi () {
* 📈 Lines of Code: 41
* 📄 Lines of Documentation: 17
* 📅 Development Period: 2020-01-30 to 2025-04-30
-* 🔥 Recent Activity: 1062.1 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 1062.6 days (avg. age of last 42 commits)
* ⚖️ License: GPL-3.0
* 🧪 Status: Experimental (no releases yet)
@@ -1002,6 +938,48 @@ declare -r SCREEN=eDP-1
---
+### photoalbum
+
+* 💻 Languages: Shell (80.1%), Make (12.3%), Config (7.6%)
+* 📚 Documentation: Markdown (100.0%)
+* 📊 Commits: 153
+* 📈 Lines of Code: 342
+* 📄 Lines of Documentation: 39
+* 📅 Development Period: 2011-11-19 to 2022-04-02
+* 🔥 Recent Activity: 1282.2 days (avg. age of last 42 commits)
+* ⚖️ License: No license found
+* 🏷️ Latest Release: 0.5.0 (2022-02-21)
+
+⚠️ **Notice**: This project appears to be finished, obsolete, or no longer maintained. Last meaningful activity was over 2 years ago. Use at your own risk.
+
+PhotoAlbum is a minimal Bash script for Unix-like systems that generates static web photo albums from directories of images. It creates pure HTML+CSS galleries without JavaScript, making them lightweight and universally compatible. The tool is designed for simplicity and portability - users point it at a directory of photos, configure basic settings like thumbnail size and gallery title, and it automatically generates a complete static website with image previews, navigation, and optional download archives.
+
+The implementation centers around a single Bash script (`photoalbum.sh`) that uses ImageMagick's `convert` command to generate thumbnails and resized images, then applies customizable HTML templates to create the gallery structure. The architecture separates configuration (via `photoalbumrc` files), templating (modular `.tmpl` files for different page components), and processing logic, allowing users to customize the appearance while maintaining the core functionality. The generated output is a self-contained `dist` directory that can be deployed to any static web server.
+
+=> https://codeberg.org/snonux/photoalbum View on Codeberg
+=> https://github.com/snonux/photoalbum View on GitHub
+
+Shell from `src/photoalbum.sh`:
+
+```AUTO
+ for sub in thumbs blurs photos; do
+ if [ -f "$DIST_DIR/$sub/$basename" ]; then
+ rm -v "$DIST_DIR/$sub/$basename"
+ fi
+ done
+ done
+}
+
+scalephotos () {
+ cd "$INCOMING_DIR" && find ./ -maxdepth 1 -type f | sort |
+ while read -r photo; do
+ declare photo="$(sed 's#^\./##' <<< "$photo")"
+ declare destphoto="$DIST_DIR/photos/$photo"
+ declare destphoto_nospace="${destphoto// /_}"
+```
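The excerpt's space-to-underscore renaming and per-type output directories can be sketched standalone. This is a hypothetical sketch, not photoalbum's actual code; the ImageMagick `convert` calls are guarded so the function still behaves where ImageMagick or the source image is absent:

```shell
# Hypothetical sketch of photoalbum-style scaling for one image file
# (assumes a bare filename with no directory components): compute the
# space-free destination path and, when ImageMagick is available and
# the source exists, write a resized copy plus a thumbnail.
scale_one() {
    local photo=$1 dist_dir=$2 size=${3:-1024}
    local dest="$dist_dir/photos/${photo// /_}"   # spaces -> underscores
    mkdir -p "$dist_dir/photos" "$dist_dir/thumbs"
    if [ -f "$photo" ] && command -v convert >/dev/null 2>&1; then
        convert "$photo" -resize "${size}x${size}" "$dest"
        convert "$photo" -resize 128x128 "$dist_dir/thumbs/${photo// /_}"
    fi
    printf '%s\n' "$dest"
}
```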
+
+---
+
### algorithms
* 💻 Languages: Go (99.2%), Make (0.8%)
@@ -1010,7 +988,7 @@ declare -r SCREEN=eDP-1
* 📈 Lines of Code: 1728
* 📄 Lines of Documentation: 18
* 📅 Development Period: 2020-07-12 to 2023-04-09
-* 🔥 Recent Activity: 1432.8 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 1433.3 days (avg. age of last 42 commits)
* ⚖️ License: Custom License
* 🧪 Status: Experimental (no releases yet)
@@ -1023,22 +1001,16 @@ The project leverages Go's generics system to provide type-safe implementations
=> https://codeberg.org/snonux/algorithms View on Codeberg
=> https://github.com/snonux/algorithms View on GitHub
-Go from `search/bst.go`:
+Go from `queue/priority.go`:
```AUTO
-func (n *node[K,V]) String() string {
- recurse := func(n *node[K,V]) string {
- if n == nil {
- return ""
- }
- return n.String()
- }
-
- return fmt.Sprintf("node[K,V]{%v:%v,%s,%s}",
- n.key,
- n.val,
- recurse(n.left),
- recurse(n.right))
+type PriorityQueue interface {
+ Insert(a int)
+ Max() (max int)
+ DeleteMax() int
+ Empty() bool
+ Size() int
+ Clear()
}
```
@@ -1052,7 +1024,7 @@ func (n *node[K,V]) String() string {
* 📈 Lines of Code: 671
* 📄 Lines of Documentation: 19
* 📅 Development Period: 2018-05-26 to 2025-01-21
-* 🔥 Recent Activity: 1434.6 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 1435.1 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -1081,11 +1053,11 @@ def out(message, prefix, flag = :none)
### foo.zone
* 📚 Documentation: Markdown (100.0%)
-* 📊 Commits: 2908
+* 📊 Commits: 2911
* 📈 Lines of Code: 0
* 📄 Lines of Documentation: 23
* 📅 Development Period: 2021-05-21 to 2022-04-02
-* 🔥 Recent Activity: 1448.4 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 1448.9 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -1108,7 +1080,7 @@ The site is built using **Gemtexter**, a static site generator that creates both
* 📈 Lines of Code: 51
* 📄 Lines of Documentation: 69
* 📅 Development Period: 2014-03-24 to 2022-04-23
-* 🔥 Recent Activity: 1913.7 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 1914.3 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -1142,7 +1114,7 @@ sub hello() {
* 📈 Lines of Code: 12420
* 📄 Lines of Documentation: 610
* 📅 Development Period: 2018-03-01 to 2020-01-22
-* 🔥 Recent Activity: 2455.3 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 2455.8 days (avg. age of last 42 commits)
* ⚖️ License: Apache-2.0
* 🏷️ Latest Release: 0.5.1 (2019-01-04)
@@ -1159,44 +1131,6 @@ The tool is implemented in C for minimal overhead and uses SystemTap for efficie
---
-### photoalbum
-
-* 💻 Languages: Shell (78.1%), Make (13.5%), Config (8.4%)
-* 📚 Documentation: Text (100.0%)
-* 📊 Commits: 153
-* 📈 Lines of Code: 311
-* 📄 Lines of Documentation: 45
-* 📅 Development Period: 2011-11-19 to 2022-02-20
-* 🔥 Recent Activity: 2879.8 days (avg. age of last 42 commits)
-* ⚖️ License: No license found
-* 🏷️ Latest Release: 0.5.0 (2022-02-21)
-
-⚠️ **Notice**: This project appears to be finished, obsolete, or no longer maintained. Last meaningful activity was over 2 years ago. Use at your own risk.
-
-PhotoAlbum is a minimal Bash script for Unix-like systems that generates static web photo albums from directories of images. It creates pure HTML+CSS galleries without JavaScript, making them lightweight and universally compatible. The tool is designed for simplicity and portability - users point it at a directory of photos, configure basic settings like thumbnail size and gallery title, and it automatically generates a complete static website with image previews, navigation, and optional download archives.
-
-The implementation centers around a single Bash script (`photoalbum.sh`) that uses ImageMagick's `convert` command to generate thumbnails and resized images, then applies customizable HTML templates to create the gallery structure. The architecture separates configuration (via `photoalbumrc` files), templating (modular `.tmpl` files for different page components), and processing logic, allowing users to customize the appearance while maintaining the core functionality. The generated output is a self-contained `dist` directory that can be deployed to any static web server.
-
-=> https://codeberg.org/snonux/photoalbum View on Codeberg
-=> https://github.com/snonux/photoalbum View on GitHub
-
-Shell from `src/photoalbum.sh`:
-
-```AUTO
- find "$DIST_DIR" -maxdepth 1 -type f -name \*.tar -delete
- declare base="$(basename "$INCOMING_DIR")"
-
- echo "Creating tarball $DIST_DIR/$tarball_name from $INCOMING_DIR"
- cd "$(dirname "$INCOMING_DIR")"
- tar "$TAR_OPTS" -f "$DIST_DIR/$tarball_name" "$base"
- cd - &>/dev/null
-}
-
-template () {
-```
-
----
-
### staticfarm-apache-handlers
* 💻 Languages: Perl (96.4%), Make (3.6%)
@@ -1205,7 +1139,7 @@ template () {
* 📈 Lines of Code: 919
* 📄 Lines of Documentation: 12
* 📅 Development Period: 2015-01-02 to 2021-11-04
-* 🔥 Recent Activity: 2964.0 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 2964.5 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🏷️ Latest Release: 1.1.3 (2015-01-02)
@@ -1218,13 +1152,51 @@ The system is particularly useful for distributed static content delivery where
=> https://codeberg.org/snonux/staticfarm-apache-handlers View on Codeberg
=> https://github.com/snonux/staticfarm-apache-handlers View on GitHub
-Perl from `debian/staticfarm-apache-handlers/usr/share/staticfarm/apache/handlers/StaticFarm/CacheControl.pm`:
+Perl from `debian/staticfarm-apache-handlers/usr/share/staticfarm/apache/handlers/StaticFarm/API.pm`:
```AUTO
-sub my_warn {
- my $msg = shift;
+sub handler {
+ my $r = shift;
+ $r->content_type('application/json');
+
+ my $method = $r->method();
+
+ my $d = {
+ method => $method,
+ uri => $r->uri(),
+ args => $r->args(),
+ out => { message => "" },
+ };
+
+ ($d->{path}) = $r->uri() =~ /^$URI_PREFIX(.*)/;
+ $d->{fullpath} = "$CONTENT_DIR$d->{path}";
+
+ my %params = map {
+ s/\.\.//g;
+ my ($k, $v) = split '=', $_;
+ $k => $v;
+ } split '&', $r->args();
+
+ $d->{params} = \%params;
+
+ if ($method eq 'GET') {
+ handler_get($r, $d);
+
+ } elsif ($method eq 'DELETE') {
+ handler_delete($r, $d);
+
+ } elsif ($method eq 'POST') {
+ handler_post($r, $d);
+
+ } elsif ($method eq 'PUT') {
+ handler_put($r, $d);
- Apache2::ServerRec::warn("CacheControl: $msg");
+ } else {
+ handler_unknown($r, $d);
+ }
+
+ return Apache2::Const::DONE;
}
```
@@ -1238,7 +1210,7 @@ sub my_warn {
* 📈 Lines of Code: 18
* 📄 Lines of Documentation: 49
* 📅 Development Period: 2014-03-24 to 2021-11-05
-* 🔥 Recent Activity: 3199.8 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 3200.4 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -1261,7 +1233,7 @@ The implementation consists of a shell script (`update-dyndns`) that accepts hos
* 📈 Lines of Code: 5360
* 📄 Lines of Documentation: 789
* 📅 Development Period: 2015-01-02 to 2021-11-05
-* 🔥 Recent Activity: 3466.5 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 3467.1 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🏷️ Latest Release: 1.0.1 (2015-01-02)
@@ -1274,33 +1246,18 @@ The tool is particularly useful for system administrators and DevOps engineers w
=> https://codeberg.org/snonux/mon View on Codeberg
=> https://github.com/snonux/mon View on GitHub
-Perl from `debian/mon/usr/share/mon/lib/MAPI/Config.pm`:
+Perl from `lib/MON/Cache.pm`:
```AUTO
sub new {
my ( $class, %opts ) = @_;
my $self = bless \%opts, $class;
- my $options = $self->{options};
-
- $options->store_first($self);
-
- $self->SUPER::init(%opts);
-
- for ( @{ $options->{unknown} } ) {
- $self->error("Unknown option: $_");
- }
-
- if ( $self->{'config'} ne '' ) {
- $self->read_config( $self->{'config'} );
- }
- elsif ( exists $ENV{MON_CONFIG} ) {
- $self->read_config( $ENV{MON_CONFIG} );
+ $self->init();
- }
- else {
- $self->read_config('/etc/mon.conf');
+ return $self;
+}
```
---
@@ -1313,7 +1270,7 @@ sub new {
* 📈 Lines of Code: 273
* 📄 Lines of Documentation: 32
* 📅 Development Period: 2015-09-29 to 2021-11-05
-* 🔥 Recent Activity: 3470.7 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 3471.2 days (avg. age of last 42 commits)
* ⚖️ License: Apache-2.0
* 🏷️ Latest Release: 0 (2015-10-26)
@@ -1349,7 +1306,7 @@ def initialize
* 📈 Lines of Code: 1839
* 📄 Lines of Documentation: 412
* 📅 Development Period: 2015-01-02 to 2021-11-05
-* 🔥 Recent Activity: 3550.3 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 3550.9 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🏷️ Latest Release: 1.0.2 (2015-01-02)
@@ -1362,14 +1319,22 @@ The project is implemented as a modular Perl application with a clean architectu
=> https://codeberg.org/snonux/pingdomfetch View on Codeberg
=> https://github.com/snonux/pingdomfetch View on GitHub
-Perl from `lib/PINGDOMFETCH/TLS.pm`:
+Perl from `lib/PINGDOMFETCH/Pingdomfetch.pm`:
```AUTO
sub new {
- my ( $class, %vals ) = @_;
+ my ( $class, $opts ) = @_;
+
+ my $config = PINGDOMFETCH::Config->new($opts);
+ my $pingdom = PINGDOMFETCH::Pingdom->new($config);
- my $self = bless \%vals, $class;
- $self->{is_critical} = 0;
+ my $self = bless {
+ config => $config,
+ pingdom => $pingdom,
+ dots_counter => 0,
+ }, $class;
+
+ $self->init_from_to_interval();
return $self;
}
@@ -1380,12 +1345,12 @@ sub new {
### gotop
* 💻 Languages: Go (98.0%), Make (2.0%)
-* 📚 Documentation: Text (50.0%), Markdown (50.0%)
+* 📚 Documentation: Markdown (50.0%), Text (50.0%)
* 📊 Commits: 57
* 📈 Lines of Code: 499
* 📄 Lines of Documentation: 8
* 📅 Development Period: 2015-05-24 to 2021-11-03
-* 🔥 Recent Activity: 3561.1 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 3561.6 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🏷️ Latest Release: 0.1 (2015-06-01)
@@ -1398,23 +1363,40 @@ The implementation follows a concurrent architecture using Go's goroutines and c
=> https://codeberg.org/snonux/gotop View on Codeberg
=> https://github.com/snonux/gotop View on GitHub
-Go from `utils/utils.go`:
+Go from `process/process.go`:
```AUTO
-func Slurp(what *string, path string) error {
- bytes, err := ioutil.ReadFile(path)
+func new(pidstr string) (Process, error) {
+ pid, err := strconv.Atoi(pidstr)
if err != nil {
- return err
+ return Process{}, err
+ }
+
+ timestamp := int32(time.Now().Unix())
+ p := Process{Pid: pid, Timestamp: timestamp}
+ var rawIo string
+
+ if err = utils.Slurp(&rawIo, fmt.Sprintf("/proc/%d/io", pid)); err != nil {
+ return p, err
+ }
+
+ if err = p.parseRawIo(rawIo); err != nil {
+ return p, err
+ }
+
+ if err = utils.Slurp(&p.Comm, fmt.Sprintf("/proc/%d/comm", pid)); err != nil {
+ return p, err
+ }
+
+ err = utils.Slurp(&p.Cmdline, fmt.Sprintf("/proc/%d/cmdline", pid))
+
+ if p.Cmdline == "" {
+ p.Id = fmt.Sprintf("(%s) %s", pidstr, p.Comm)
} else {
- for _, byte := range bytes {
- if byte == 0 {
- *what += " "
- } else {
- *what += string(byte)
- }
- }
+ p.Id = fmt.Sprintf("(%s) %s", pidstr, p.Cmdline)
}
- return nil
+
+ return p, err
}
```
@@ -1426,7 +1408,7 @@ func Slurp(what *string, path string) error {
* 📊 Commits: 670
* 📈 Lines of Code: 1675
* 📅 Development Period: 2011-03-06 to 2018-12-22
-* 🔥 Recent Activity: 3616.7 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 3617.2 days (avg. age of last 42 commits)
* ⚖️ License: Custom License
* 🏷️ Latest Release: v1.0.0 (2018-12-22)
@@ -1441,34 +1423,18 @@ The system works through a template-driven architecture where content is written
=> https://codeberg.org/snonux/xerl View on Codeberg
=> https://github.com/snonux/xerl View on GitHub
-Perl from `Xerl/Page/Menu.pm`:
+Perl from `Xerl/XML/Reader.pm`:
```AUTO
-sub generate {
- my $self = $_[0];
- my $config = $self->get_config();
-
- my @site = split /\//, $config->get_site();
- my @compare = @site;
- my $site = pop @site;
-
- my ( $content, $siteadd ) = ( 'content/', '' );
+sub open {
+ my $self = shift;
- my $menuelem = $self->get_menu( $content, $siteadd, shift @compare );
-
- $self->push_array($menuelem)
- if $menuelem->first_array()->array_length() > 1;
-
- for my $s (@site) {
- $content .= "$s.sub/";
- $siteadd .= "$s/";
-
- $menuelem = $self->get_menu( $content, $siteadd, shift @compare );
- $self->push_array($menuelem)
- if $menuelem->first_array()->array_length() > 1;
+ if ( -f $self->get_path() ) {
+ return 0;
+ }
+ else {
+ return 1;
}
-
- return undef;
}
```
@@ -1482,7 +1448,7 @@ sub generate {
* 📈 Lines of Code: 88
* 📄 Lines of Documentation: 148
* 📅 Development Period: 2015-06-18 to 2015-12-05
-* 🔥 Recent Activity: 3664.8 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 3665.3 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -1520,7 +1486,7 @@ done
* 📈 Lines of Code: 1681
* 📄 Lines of Documentation: 539
* 📅 Development Period: 2014-03-10 to 2021-11-03
-* 🔥 Recent Activity: 3942.8 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 3943.3 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🏷️ Latest Release: 1.0.2 (2014-11-17)
@@ -1533,15 +1499,21 @@ The implementation is written in Python and built on top of the bigsuds library,
=> https://codeberg.org/snonux/fapi View on Codeberg
=> https://github.com/snonux/fapi View on GitHub
-Python from `contrib/bigsuds-1.0/setup.py`:
+Python from `contrib/bigsuds-1.0/bigsuds.py`:
```AUTO
-def extract_version(filename):
- contents = open(filename).read()
- match = re.search('^__version__\s+=\s+[\'"](.*)[\'"]\s*$', contents,
- re.MULTILINE)
- if match is not None:
- return match.group(1)
+class ArgumentError(OperationFailed):
+ are passed to an iControl method."""
+
+
+class BIGIP(object):
+
+ Example usage:
+ >>> b = BIGIP('bigip-hostname')
+ >>> print b.LocalLB.Pool.get_list()
+ ['/Common/test_pool']
+ >>> b.LocalLB.Pool.add_member(['/Common/test_pool'], \
+ [[{'address': '10.10.10.10', 'port': 20030}]])
```
---
@@ -1554,7 +1526,7 @@ def extract_version(filename):
* 📈 Lines of Code: 65
* 📄 Lines of Documentation: 228
* 📅 Development Period: 2013-03-22 to 2021-11-04
-* 🔥 Recent Activity: 3997.2 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 3997.8 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🏷️ Latest Release: 0.0.0.0 (2013-03-22)
@@ -1589,7 +1561,7 @@ build:
* 📈 Lines of Code: 136
* 📄 Lines of Documentation: 96
* 📅 Development Period: 2013-03-22 to 2021-11-05
-* 🔥 Recent Activity: 4010.2 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 4010.8 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🏷️ Latest Release: 0.2.0 (2014-07-05)
@@ -1624,7 +1596,7 @@ build:
* 📈 Lines of Code: 134
* 📄 Lines of Documentation: 106
* 📅 Development Period: 2013-03-22 to 2021-11-05
-* 🔥 Recent Activity: 4017.7 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 4018.2 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🏷️ Latest Release: 0.1.5 (2014-06-22)
@@ -1647,7 +1619,7 @@ The tool works by having both hosts run the same command simultaneously - one ac
* 📈 Lines of Code: 493
* 📄 Lines of Documentation: 26
* 📅 Development Period: 2009-09-27 to 2021-11-02
-* 🔥 Recent Activity: 4061.0 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 4061.5 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🏷️ Latest Release: 0.9.3 (2014-06-14)
@@ -1687,7 +1659,7 @@ function findbin () {
* 📈 Lines of Code: 286
* 📄 Lines of Documentation: 144
* 📅 Development Period: 2013-03-22 to 2021-11-05
-* 🔥 Recent Activity: 4066.0 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 4066.6 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🏷️ Latest Release: 0.4.3 (2014-06-16)
@@ -1710,7 +1682,7 @@ The implementation uses modern Perl with the Moo object system and consists of t
* 📈 Lines of Code: 191
* 📄 Lines of Documentation: 8
* 📅 Development Period: 2014-03-24 to 2014-03-24
-* 🔥 Recent Activity: 4127.3 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 4127.8 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -1723,19 +1695,18 @@ Each script explores different themes - Christmas celebrations, mathematical stu
=> https://codeberg.org/snonux/perl-poetry View on Codeberg
=> https://github.com/snonux/perl-poetry View on GitHub
-Perl from `travel.pl`:
+Perl from `perllove.pl`:
```AUTO
-do { sub travel { to => stop,off } }; foreach (@location) {};
-
-far_away: { is => our $destiny } foreach @personality;
-for $the (@souls) { its => our $path };
-
-do { study and s/eek// for @wisdom };
-do { require strict; import { of, tied $power } };
-
-local $robber, do kill unless tied $power;
-no warnings; do { alarm $us };
+no strict;
+no warnings;
+we: do { print 'love'
+or warn and die 'slow'
+unless not defined true #respect
+} for reverse'd', qw/mind of you/
+and map { 'me' } 'into', undef $mourning;
+__END__
+v2 Copyright (2005, 2006) by Paul C. Buetow, http://paul.buetow.org
```
---
@@ -1746,7 +1717,7 @@ no warnings; do { alarm $us };
* 📊 Commits: 7
* 📈 Lines of Code: 80
* 📅 Development Period: 2011-07-09 to 2015-01-13
-* 🔥 Recent Activity: 4207.4 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 4207.9 days (avg. age of last 42 commits)
* ⚖️ License: Custom License
* 🧪 Status: Experimental (no releases yet)
@@ -1791,38 +1762,15 @@ if ($ENV{SERVER_NAME} eq 'ipv6.buetow.org') {
---
-### cpuinfo
-
-* 💻 Languages: Shell (53.2%), Make (46.8%)
-* 📚 Documentation: Text (100.0%)
-* 📊 Commits: 28
-* 📈 Lines of Code: 124
-* 📄 Lines of Documentation: 75
-* 📅 Development Period: 2010-11-05 to 2021-11-05
-* 🔥 Recent Activity: 4248.0 days (avg. age of last 42 commits)
-* ⚖️ License: No license found
-* 🏷️ Latest Release: 1.0.2 (2014-06-22)
-
-⚠️ **Notice**: This project appears to be finished, obsolete, or no longer maintained. Last meaningful activity was over 2 years ago. Use at your own risk.
-
-**cpuinfo** is a small command-line utility that provides a human-readable summary of CPU information on Linux systems. It parses `/proc/cpuinfo` using AWK to extract and display key processor details including the CPU model, cache size, number of physical processors, cores, and whether hyper-threading is enabled. The tool calculates total CPU frequency and bogomips across all cores, making it easier to understand complex multi-core and multi-processor configurations at a glance.
-
-The implementation is remarkably simple - a single shell script that uses GNU AWK to parse the kernel's CPU information and format it into a clear, structured output. It's particularly useful for system administrators and developers who need to quickly understand CPU topology, especially on servers with multiple processors or complex threading configurations where the raw `/proc/cpuinfo` output can be overwhelming.
-
-=> https://codeberg.org/snonux/cpuinfo View on Codeberg
-=> https://github.com/snonux/cpuinfo View on GitHub
-
----
-
### loadbars
* 💻 Languages: Perl (97.4%), Make (2.6%)
-* 📚 Documentation: Text (100.0%)
+* 📚 Documentation: Text (93.5%), Markdown (6.5%)
* 📊 Commits: 527
* 📈 Lines of Code: 1828
-* 📄 Lines of Documentation: 100
+* 📄 Lines of Documentation: 107
* 📅 Development Period: 2010-11-05 to 2015-05-23
-* 🔥 Recent Activity: 4278.1 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 4215.4 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🏷️ Latest Release: 0.7.5 (2014-06-22)
@@ -1835,20 +1783,39 @@ The application is implemented using a multi-threaded architecture where each mo
=> https://codeberg.org/snonux/loadbars View on Codeberg
=> https://github.com/snonux/loadbars View on GitHub
-Perl from `lib/Loadbars/Constants.pm`:
+Perl from `lib/Loadbars/Utils.pm`:
```AUTO
-use strict;
-use warnings;
+sub trim (\$) {
+ my $str = shift;
+ $$str =~ s/^[\s\t]+//;
+ $$str =~ s/[\s\t]+$//;
+ return undef;
+}
+```
-use SDL::Color;
+---
-use constant {
- COPYRIGHT => '2010-2013 (c) Paul Buetow <loadbars@mx.buetow.org>',
- CONFFILE => $ENV{HOME} . '/.loadbarsrc',
- CSSH_CONFFILE => '/etc/clusters',
- CSSH_MAX_RECURSION => 10,
-```
+### cpuinfo
+
+* 💻 Languages: Shell (53.2%), Make (46.8%)
+* 📚 Documentation: Text (100.0%)
+* 📊 Commits: 28
+* 📈 Lines of Code: 124
+* 📄 Lines of Documentation: 75
+* 📅 Development Period: 2010-11-05 to 2021-11-05
+* 🔥 Recent Activity: 4248.5 days (avg. age of last 42 commits)
+* ⚖️ License: No license found
+* 🏷️ Latest Release: 1.0.2 (2014-06-22)
+
+⚠️ **Notice**: This project appears to be finished, obsolete, or no longer maintained. Last meaningful activity was over 2 years ago. Use at your own risk.
+
+**cpuinfo** is a small command-line utility that provides a human-readable summary of CPU information on Linux systems. It parses `/proc/cpuinfo` using AWK to extract and display key processor details including the CPU model, cache size, number of physical processors, cores, and whether hyper-threading is enabled. The tool calculates total CPU frequency and bogomips across all cores, making it easier to understand complex multi-core and multi-processor configurations at a glance.
+
+The implementation is remarkably simple - a single shell script that uses GNU AWK to parse the kernel's CPU information and format it into a clear, structured output. It's particularly useful for system administrators and developers who need to quickly understand CPU topology, especially on servers with multiple processors or complex threading configurations where the raw `/proc/cpuinfo` output can be overwhelming.
+
+=> https://codeberg.org/snonux/cpuinfo View on Codeberg
+=> https://github.com/snonux/cpuinfo View on GitHub
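The AWK pipeline described above boils down to grouping `/proc/cpuinfo` records and aggregating a few fields. A minimal Python sketch of the same idea (a hypothetical re-implementation for illustration, not the project's actual AWK script; the sample data stands in for a real `/proc/cpuinfo`):

```python
# Hypothetical sketch of cpuinfo's aggregation logic, in Python rather than
# the project's AWK. SAMPLE mimics /proc/cpuinfo for one socket with
# 2 cores and 4 hardware threads (i.e. hyper-threading enabled).
SAMPLE = "\n\n".join(
    f"processor\t: {i}\nphysical id\t: 0\ncpu cores\t: 2\n"
    f"siblings\t: 4\ncpu MHz\t: 2000.000\nbogomips\t: 4000.00"
    for i in range(4)
)

def summarize(text):
    cpus, cur = [], {}
    for line in text.splitlines():
        if not line.strip():           # a blank line separates logical CPUs
            if cur:
                cpus.append(cur)
                cur = {}
            continue
        key, _, val = line.partition(":")
        cur[key.strip()] = val.strip()
    if cur:
        cpus.append(cur)
    cores = int(cpus[0].get("cpu cores", 1))
    siblings = int(cpus[0].get("siblings", cores))
    return {
        "physical processors": len({c.get("physical id", "0") for c in cpus}),
        "cores per processor": cores,
        # More hardware threads than cores per package implies hyper-threading.
        "hyper-threading": siblings > cores,
        "total MHz": sum(float(c["cpu MHz"]) for c in cpus),
        "total bogomips": sum(float(c["bogomips"]) for c in cpus),
    }

print(summarize(SAMPLE))
```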
---
@@ -1858,7 +1825,7 @@ use constant {
* 📊 Commits: 110
* 📈 Lines of Code: 614
* 📅 Development Period: 2011-02-05 to 2022-04-21
-* 🔥 Recent Activity: 4327.6 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 4328.1 days (avg. age of last 42 commits)
* ⚖️ License: Custom License
* 🏷️ Latest Release: v1.4 (2022-04-29)
@@ -1871,7 +1838,7 @@ The architecture centers around a modular plugin system where custom functionali
=> https://codeberg.org/snonux/perldaemon View on Codeberg
=> https://github.com/snonux/perldaemon View on GitHub
-Perl from `lib/PerlDaemonModules/ExampleModule.pm`:
+Perl from `lib/PerlDaemonModules/ExampleModule2.pm`:
```AUTO
sub new ($$$) {
@@ -1894,7 +1861,7 @@ sub new ($$$) {
* 📈 Lines of Code: 122
* 📄 Lines of Documentation: 10
* 📅 Development Period: 2011-01-27 to 2014-06-22
-* 🔥 Recent Activity: 4658.6 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 4659.1 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🏷️ Latest Release: v0.2 (2011-01-27)
@@ -1939,7 +1906,7 @@ function read_config_values(config_file) {
* 📈 Lines of Code: 720
* 📄 Lines of Documentation: 6
* 📅 Development Period: 2008-06-21 to 2021-11-03
-* 🔥 Recent Activity: 4721.2 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 4721.8 days (avg. age of last 42 commits)
* ⚖️ License: Custom License
* 🏷️ Latest Release: v0.3 (2009-02-08)
@@ -1993,7 +1960,7 @@ public SPrefs(Component parent, HashMap<String,String> options) {
* 📈 Lines of Code: 17380
* 📄 Lines of Documentation: 947
* 📅 Development Period: 2009-02-07 to 2021-05-01
-* 🔥 Recent Activity: 5351.9 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 5352.5 days (avg. age of last 42 commits)
* ⚖️ License: GPL-2.0
* 🏷️ Latest Release: v0.1 (2009-02-08)
@@ -2010,34 +1977,26 @@ The implementation uses a clean separation of concerns with dedicated packages f
=> https://codeberg.org/snonux/netcalendar View on Codeberg
=> https://github.com/snonux/netcalendar View on GitHub
-Java from `sources/client/JCalendarDatePicker.java`:
+Java from `sources/client/inputforms/CreateNewEvent.java`:
```AUTO
-private JCalendar jcalendar;
-private Calendar calendar;
-private CalendarEvent calendarEvent;
+private final static long serialVersionUID = 1L;
-public JCalendarDatePicker(NetCalendarClient netCalendarClient) {
- super("Calendar", netCalendarClient);
-
- initComponents();
- setResizable(false);
- pack();
- setVisible(true);
-}
+private final static String[] labels =
+ { "Description: ", "Category: ", "Place: ", "Yearly: ", "Date: "};
```
---
### ychat
-* 💻 Languages: C++ (51.1%), C/C++ (29.9%), Shell (15.9%), HTML (1.4%), Perl (1.2%), Make (0.4%), CSS (0.1%)
+* 💻 Languages: C++ (54.9%), C/C++ (23.0%), Shell (13.8%), Perl (2.5%), HTML (2.5%), Config (2.3%), Make (0.8%), CSS (0.2%)
* 📚 Documentation: Text (100.0%)
* 📊 Commits: 67
-* 📈 Lines of Code: 9958
-* 📄 Lines of Documentation: 103
-* 📅 Development Period: 2008-05-15 to 2014-07-01
-* 🔥 Recent Activity: 5381.5 days (avg. age of last 42 commits)
+* 📈 Lines of Code: 67884
+* 📄 Lines of Documentation: 127
+* 📅 Development Period: 2008-05-15 to 2014-06-30
+* 🔥 Recent Activity: 5372.7 days (avg. age of last 42 commits)
* ⚖️ License: GPL-2.0
* 🏷️ Latest Release: yhttpd-0.7.2 (2013-04-06)
@@ -2052,18 +2011,6 @@ The architecture is built around several key managers: a socket manager for hand
=> https://codeberg.org/snonux/ychat View on Codeberg
=> https://github.com/snonux/ychat View on GitHub
-C++ from `room.cpp`:
-
-```AUTO
-#define ROOM_CXX
-
-#include "room.h"
-
-using namespace std;
-
-room::room( string s_name ) : name( s_name )
-```
-
---
### hsbot
@@ -2072,7 +2019,7 @@ room::room( string s_name ) : name( s_name )
* 📊 Commits: 80
* 📈 Lines of Code: 601
* 📅 Development Period: 2009-11-22 to 2011-10-17
-* 🔥 Recent Activity: 5447.6 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 5448.1 days (avg. age of last 42 commits)
* ⚖️ License: Custom License
* 🧪 Status: Experimental (no releases yet)
@@ -2085,20 +2032,43 @@ The implementation uses a clean separation of concerns with modules for IRC conn
=> https://codeberg.org/snonux/hsbot View on Codeberg
=> https://github.com/snonux/hsbot View on GitHub
-Haskell from `HsBot/Base/Cmd.hs`:
+Haskell from `HsBot/Plugins/PrintMessages.hs`:
```AUTO
-module HsBot.Base.Cmd where
+module HsBot.Plugins.PrintMessages (makePrintMessages) where
+import HsBot.Plugins.Base
+
+import HsBot.Base.Env
import HsBot.Base.State
-data Cmd = Cmd String String (State -> IO ())
+printMessages :: CallbackFunction
+printMessages str sendMessage env@(Env state _) = do
+ putStrLn $ (currentChannel state) ++ " "
+```
-instance Show Cmd where
- show (Cmd a b _) = a ++ " - " ++ b
+---
-cmdGet :: String -> [Cmd] -> Maybe Cmd
-```
+### fype
+
+* 💻 Languages: C (72.1%), C/C++ (20.7%), HTML (5.7%), Make (1.5%)
+* 📚 Documentation: Text (71.3%), LaTeX (28.7%)
+* 📊 Commits: 99
+* 📈 Lines of Code: 10196
+* 📄 Lines of Documentation: 1741
+* 📅 Development Period: 2008-05-15 to 2021-11-03
+* 🔥 Recent Activity: 5609.9 days (avg. age of last 42 commits)
+* ⚖️ License: Custom License
+* 🧪 Status: Experimental (no releases yet)
+
+⚠️ **Notice**: This project appears to be finished, obsolete, or no longer maintained. Last meaningful activity was over 2 years ago. Use at your own risk.
+
+**Fype** is a 32-bit scripting language interpreter written in C that aims to be "at least as good as AWK" while providing a different syntax and some unique features. Created by Paul C. Buetow as a fun project, Fype supports variables, functions, procedures, loops, arrays, and control structures with features like variable synonyms (references), nested functions/procedures, and automatic type conversion. The language uses a simple syntax with statements ending in semicolons and supports both global procedures (which share scope with their callers) and lexically-scoped functions.
+
+The implementation is built using a straightforward top-down parser with a maximum lookahead of 1 token, simultaneously parsing and interpreting code (meaning syntax errors are only detected at runtime). The architecture is modular with separate components for scanning/tokenization, symbol management, garbage collection, type conversion, and data structures (including arrays, lists, hash tables, stacks, and trees). The interpreter is designed for Unix-like systems (BSD/Linux) and includes built-in functions for I/O, math operations, bitwise operations, system calls like `fork`, and memory management with garbage collection.
+
+=> https://codeberg.org/snonux/fype View on Codeberg
+=> https://github.com/snonux/fype View on GitHub
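The parsing strategy described above (top-down, at most one token of lookahead, interpreting while parsing) can be illustrated with a toy expression evaluator. This is an illustrative Python sketch, not Fype's actual C implementation:

```python
import re

class Interp:
    """Toy parse-and-interpret evaluator: top-down, max. lookahead of one
    token, evaluating while it parses -- so, as in Fype, syntax errors only
    surface when the offending code is actually reached at runtime.
    Illustrative only; Fype itself is C and covers a full language."""

    def __init__(self, src):
        self.toks = re.findall(r"\d+|[+*()]", src)
        self.pos = 0

    def peek(self):                      # the single token of lookahead
        return self.toks[self.pos] if self.pos < len(self.toks) else None

    def advance(self):
        tok = self.peek()
        self.pos += 1
        return tok

    def expr(self):                      # expr := term ('+' term)*
        val = self.term()
        while self.peek() == "+":
            self.advance()
            val += self.term()
        return val

    def term(self):                      # term := factor ('*' factor)*
        val = self.factor()
        while self.peek() == "*":
            self.advance()
            val *= self.factor()
        return val

    def factor(self):                    # factor := NUMBER | '(' expr ')'
        tok = self.advance()
        if tok == "(":
            val = self.expr()
            if self.advance() != ")":    # only detected if this path runs
                raise SyntaxError("expected ')'")
            return val
        return int(tok)

print(Interp("2+3*(4+1)").expr())  # evaluates as it parses
```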
---
@@ -2109,7 +2079,7 @@ cmdGet :: String -> [Cmd] -> Maybe Cmd
* 📈 Lines of Code: 0
* 📄 Lines of Documentation: 7
* 📅 Development Period: 2008-05-15 to 2015-05-23
-* 🔥 Recent Activity: 5808.5 days (avg. age of last 42 commits)
+* 🔥 Recent Activity: 5809.1 days (avg. age of last 42 commits)
* ⚖️ License: No license found
* 🏷️ Latest Release: v1.0 (2008-08-24)
@@ -2121,36 +2091,3 @@ The implementation features a modular architecture with separate packages for co
=> https://codeberg.org/snonux/vs-sim View on Codeberg
=> https://github.com/snonux/vs-sim View on GitHub
-
----
-
-### fype
-
-* 💻 Languages: C (71.2%), C/C++ (20.7%), HTML (6.6%), Make (1.5%)
-* 📚 Documentation: Text (60.3%), LaTeX (39.7%)
-* 📊 Commits: 99
-* 📈 Lines of Code: 8954
-* 📄 Lines of Documentation: 1432
-* 📅 Development Period: 2008-05-15 to 2014-06-30
-* 🔥 Recent Activity: 5834.2 days (avg. age of last 42 commits)
-* ⚖️ License: Custom License
-* 🧪 Status: Experimental (no releases yet)
-
-⚠️ **Notice**: This project appears to be finished, obsolete, or no longer maintained. Last meaningful activity was over 2 years ago. Use at your own risk.
-
-**Fype** is a 32-bit scripting language interpreter written in C that aims to be "at least as good as AWK" while providing a different syntax and some unique features. Created by Paul C. Buetow as a fun project, Fype supports variables, functions, procedures, loops, arrays, and control structures with features like variable synonyms (references), nested functions/procedures, and automatic type conversion. The language uses a simple syntax with statements ending in semicolons and supports both global procedures (which share scope with their callers) and lexically-scoped functions.
-
-The implementation is built using a straightforward top-down parser with a maximum lookahead of 1 token, simultaneously parsing and interpreting code (meaning syntax errors are only detected at runtime). The architecture is modular with separate components for scanning/tokenization, symbol management, garbage collection, type conversion, and data structures (including arrays, lists, hash tables, stacks, and trees). The interpreter is designed for Unix-like systems (BSD/Linux) and includes built-in functions for I/O, math operations, bitwise operations, system calls like `fork`, and memory management with garbage collection.
-
-=> https://codeberg.org/snonux/fype View on Codeberg
-=> https://github.com/snonux/fype View on GitHub
-
-C from `src/data/queue.h`:
-
-```AUTO
-typedef struct {
- unsigned i_left;
- Queue *p_queue;
- QueueElem *p_current;
- QueueElem *p_next;
-```
diff --git a/about/showcase/debroid/image-1.png b/about/showcase/debroid/image-1.png
index ca6100d2..0c815b21 100644
--- a/about/showcase/debroid/image-1.png
+++ b/about/showcase/debroid/image-1.png
diff --git a/gemfeed/2025-06-22-task-samurai.gmi b/gemfeed/2025-06-22-task-samurai.gmi
index 8e353242..503f0106 100644
--- a/gemfeed/2025-06-22-task-samurai.gmi
+++ b/gemfeed/2025-06-22-task-samurai.gmi
@@ -13,7 +13,7 @@
* ⇢ ⇢ Where and how to get it
* ⇢ ⇢ Lessons learned from building Task Samurai with agentic coding
* ⇢ ⇢ ⇢ Developer workflow
-* ⇢ ⇢ ⇢ How it went down
+* ⇢ ⇢ ⇢ How it went
* ⇢ ⇢ ⇢ What went wrong
* ⇢ ⇢ ⇢ Patterns that helped
* ⇢ ⇢ ⇢ What I learned using agentic coding
@@ -30,6 +30,7 @@ Task Samurai is a fast terminal interface for Taskwarrior written in Go using th
### Why does this exist?
I wanted to tinker with agentic coding. This project was implemented entirely using OpenAI Codex. (After this blog post was published, I also used the Claude Code CLI.)
+
* I wanted a faster UI for Taskwarrior than other options, like Vit, which is Python-based.
* I wanted something built with Bubble Tea, but I never had time to dive deep into it.
* I wanted to build a toy project (like Task Samurai) first, before tackling the big ones, to get started with agentic coding.
@@ -56,17 +57,19 @@ And follow the `README.md`!
### Developer workflow
-I was trying out OpenAI Codex because I regularly run out of Claude Code CLI (another agentic coding tool I am trying out currently) credits (it still happens!), but Codex was still available to me. So, I seized the opportunity to push agentic coding a bit more using another platform.
+I was trying out OpenAI Codex because I regularly run out of Claude Code CLI (another agentic coding tool I am currently trying out) credits (it still happens!), but Codex was still available to me. So, I took the opportunity to push agentic coding a bit further with another platform.
I didn't really love the web UI you have to use for Codex, as I usually live in the terminal. But this is all I have for Codex for now, and I thought I'd give it a try regardless. The web UI is simple and pretty straightforward. There's also a Codex CLI one could use directly in the terminal, but I didn't get it working. I will try again soon.
+> Update: Codex CLI now works for me after OpenAI released a new version!
+
For every task you give Codex, it spins up a dedicated container. From there, you can drill down and watch what it is doing. At the end, the result is presented as a code diff, and you can suggest further changes to the codebase. What I found inconvenient is that every additional change carries overhead, because Codex has to spin up a container and bootstrap the entire development environment again, which adds extra delay. That could be eliminated by setting up predefined custom containers, but that feature still seems somewhat limited.
-Once satisfied, you can ask Codex to create a GitHub PR; from there, you can merge it and then pull it to your local laptop or workstation to test the changes again. I found myself looping a lot around the Codex UI, GitHub PRs, and local checkouts.
+Once satisfied, you can ask Codex to create a GitHub PR (a pity that only GitHub is supported, and no other Git hosting services); from there, you can merge it and then pull it to your local laptop or workstation to test the changes again. I found myself looping a lot between the Codex UI, GitHub PRs, and local checkouts.
-### How it went down
+### How it went
-Task Samurai's codebase came together quickly: the entire Git history spans from June 19 to 22, 2025, culminating in 179 commits. Here are the broad strokes:
+Task Samurai's codebase came together quickly: the entire Git history spans from June 19 to 22, 2025, culminating in 179 commits:
* June 19: Scaffolded the Go boilerplate, set up tests, integrated the Bubble Tea UI framework, and got the first table views showing up.
* June 20: (The big one—120 commits!) Added hotkeys, colourized tasks, annotation support, undo/redo, and, for fun, fireworks on quit (which never worked and was later removed). This is where most of the bugs, merges, and fast-paced changes happened.
@@ -79,7 +82,7 @@ It's worth noting that I worked on it in the evenings when I had some free time,
### What went wrong
-Going agentic isn't all smooth sailing. Here are the hiccups I ran into, plus a few hard-earned lessons:
+Going agentic isn't all smooth. Here are the hiccups I ran into, plus a few lessons:
* Merge Floods: Every minor feature or fix existed on its own branch, so merging was a constant process. It kept progress flowing but also drowned the commit history in noise and the occasional conflict. I found this to be an issue with OpenAI's Codex in particular, and not so much with other agentic coding tools like Claude Code CLI (not covered in this blog post).
* Fixes on fixes: Features like "fireworks on exit" had chains of "fix exit," "fix cell selection," etc. Sometimes, new additions introduced bugs that needed rapid patching.
@@ -92,29 +95,28 @@ Despite the chaos, a few strategies kept things moving:
* Tiny PRs: Small, atomic merges meant feedback came fast (and so did fixes).
* Tests Matter: A solid base of unit tests for task manipulations kept things from breaking entirely when experimenting.
* Live Documentation: Documentation, such as the README, was updated regularly to reflect all the hotkey and feature changes.
+
Maybe a better approach would have been to design the whole application up front before letting Codex do any of the coding. I will try that with my next toy project.
### What I learned using agentic coding
-Stepping into agentic coding with Codex as my "pair programmer" was a genuine shift. I learned a lot—not just about automating code generation, but also about how you have to tightly steer, guide, and audit every line as things move at breakneck speed. I must admit, I sometimes lost track of what all the generated code was actually doing. But as the features seemed to work after a few iterations, I was satisfied—which is a bit concerning. Imagine if I approved a PR for a production-grade deployment without fully understanding what it was doing (and not a toy project like in this post).
-
-Discussing requirements with Codex forced me to clarify features and spot logical pitfalls earlier. All those fast iterations meant I was constantly coaxing more helpful, less ambiguous code out of the model—making me rethink how to break features into clear, testable steps.
+Stepping into agentic coding with Codex as my "pair programmer" was a big shift. I learned a lot—not just about automating code generation, but also about how you have to tightly steer, guide, and audit every line as things move at high speed. I must admit, I sometimes lost track of what all the generated code was actually doing. But as the features seemed to work after a few iterations, I was satisfied—which is a bit concerning. Imagine if I approved a PR for a production-grade deployment without fully understanding what it was doing (and not a toy project like in this post).
### How much time did I save?
-Did it buy me speed? Let's do some back-of-the-envelope math:
+Did it buy me speed?
* Say each commit takes Codex about 2 minutes to generate, guide, and review, times 179 commits = about _6 hours of active development_.
* If you coded it all yourself, including all the bug fixes, features, design, and documentation, you might spend _10–20 hours_.
-* That's a couple of days potential savings.
+* That's a couple of days of potential savings—and I am by no means an expert in agentic coding, since this was my first completed agentic coding project.
## Conclusion
-Building Task Samurai with agentic coding was a wild ride—rapid feature growth, plenty of churns, countless fast fixes, and more merge commits I'd expected. Keep the iterations short (or maybe in my next experiment, much larger, with better and more complete design before generating a single line of code), keep tests and documentation concise, and review and refine for final polish at the end. Even with the bumps along the way, shipping a polished terminal UI in days instead of weeks is a testament to the raw power (and some hazards) of agentic development.
+Building Task Samurai with agentic coding was a wild ride—rapid feature growth, countless fast fixes, and more merge commits than I'd expected. Keep the iterations short (or maybe in my next experiment, much larger, with better and more complete design before generating a single line of code), keep tests and documentation concise, and review and refine for final polish at the end. Even with the bumps along the way, shipping a polished terminal UI in days instead of weeks is a testament to the power of agentic development.
Am I an agentic coding expert now? I don't think so. There are still many things to learn, and the landscape is constantly evolving.
-While working on Task Samurai, there were times I genuinely missed manual coding and the satisfaction that comes from writing every line yourself, debugging issues manually, and crafting solutions from scratch. However, this is the direction in which the industry seems to be shifting, unfortunately. If applied correctly, AI will boost performance, and if you don't use AI, your next performance review may be awkward.
+While working on Task Samurai, there were times I missed manual coding and the satisfaction that comes from writing every line yourself, debugging issues manually, and crafting solutions from scratch. However, this is the direction in which the industry seems to be shifting, unfortunately. If applied correctly, AI will boost performance, and if you don't use AI, your next performance review may be awkward.
Personally, I am not sure whether I like where the industry is going with agentic coding. I love "traditional" coding, and with agentic coding you operate at a higher level and don't interact directly with code as often, which I would miss. I think that in the future, designing, reviewing, and being able to read and understand code will be more important than writing code by hand.
diff --git a/gemfeed/2025-06-22-task-samurai.gmi.tpl b/gemfeed/2025-06-22-task-samurai.gmi.tpl
index 59ccd54f..6b94be4d 100644
--- a/gemfeed/2025-06-22-task-samurai.gmi.tpl
+++ b/gemfeed/2025-06-22-task-samurai.gmi.tpl
@@ -16,6 +16,7 @@ Task Samurai is a fast terminal interface for Taskwarrior written in Go using th
### Why does this exist?
I wanted to tinker with agentic coding. This project was implemented entirely using OpenAI Codex. (After this blog post was published, I also used the Claude Code CLI.)
+
* I wanted a faster UI for Taskwarrior than other options, like Vit, which is Python-based.
* I wanted something built with Bubble Tea, but I never had time to dive deep into it.
* I wanted to build a toy project (like Task Samurai) first, before tackling the big ones, to get started with agentic coding.
@@ -42,17 +43,19 @@ And follow the `README.md`!
### Developer workflow
-I was trying out OpenAI Codex because I regularly run out of Claude Code CLI (another agentic coding tool I am trying out currently) credits (it still happens!), but Codex was still available to me. So, I seized the opportunity to push agentic coding a bit more using another platform.
+I was trying out OpenAI Codex because I regularly run out of Claude Code CLI (another agentic coding tool I am currently trying out) credits (it still happens!), but Codex was still available to me. So, I took the opportunity to push agentic coding a bit further with another platform.
I didn't really love the web UI you have to use for Codex, as I usually live in the terminal. But this is all I have for Codex for now, and I thought I'd give it a try regardless. The web UI is simple and pretty straightforward. There's also a Codex CLI one could use directly in the terminal, but I didn't get it working. I will try again soon.
+> Update: Codex CLI now works for me, after OpenAI released a new version!
+
For every task given to Codex, it spins up its own container. From there, you can drill down and watch what it is doing. At the end, the result (in the form of a code diff) will be presented. From there, you can make suggestions about what else to change in the codebase. What I found inconvenient is that for every additional change, there's an overhead because Codex has to spin up a container and bootstrap the entire development environment again, which adds extra delay. That could be eliminated by setting up predefined custom containers, but that feature still seems somewhat limited.
-Once satisfied, you can ask Codex to create a GitHub PR; from there, you can merge it and then pull it to your local laptop or workstation to test the changes again. I found myself looping a lot around the Codex UI, GitHub PRs, and local checkouts.
+Once satisfied, you can ask Codex to create a GitHub PR (too bad only GitHub is supported, and no other Git hosting services); from there, you can merge it and then pull it to your local laptop or workstation to test the changes again. I found myself looping a lot around the Codex UI, GitHub PRs, and local checkouts.
-### How it went down
+### How it went
-Task Samurai's codebase came together quickly: the entire Git history spans from June 19 to 22, 2025, culminating in 179 commits. Here are the broad strokes:
+Task Samurai's codebase came together quickly: the entire Git history spans from June 19 to 22, 2025, culminating in 179 commits:
* June 19: Scaffolded the Go boilerplate, set up tests, integrated the Bubble Tea UI framework, and got the first table views showing up.
* June 20: (The big one—120 commits!) Added hotkeys, colourized tasks, annotation support, undo/redo, and, for fun, fireworks on quit (which never worked and was removed later). This is where most of the bugs, merges, and fast-paced changes happened.
@@ -65,7 +68,7 @@ It's worth noting that I worked on it in the evenings when I had some free time,
### What went wrong
-Going agentic isn't all smooth sailing. Here are the hiccups I ran into, plus a few hard-earned lessons:
+Going agentic isn't all smooth. Here are the hiccups I ran into, plus a few lessons:
* Merge Floods: Every minor feature or fix existed on its own branch, so merging was a constant process. It kept progress flowing but also drowned the commit history in noise and the occasional conflict. I found this to be an issue with OpenAI's Codex in particular, and not so much with other agentic coding tools like Claude Code CLI (not covered in this blog post).
* Fixes on fixes: Features like "fireworks on exit" had chains of "fix exit," "fix cell selection," etc. Sometimes, new additions introduced bugs that needed rapid patching.
@@ -78,29 +81,28 @@ Despite the chaos, a few strategies kept things moving:
* Tiny PRs: Small, atomic merges meant feedback came fast (and so did fixes).
* Tests Matter: A solid base of unit tests for task manipulations kept things from breaking entirely when experimenting.
* Live Documentation: Documentation, such as the README, was updated regularly to reflect all the hotkey and feature changes.
+
Maybe a better approach would have been to design the whole application up front before letting Codex do any of the coding. I will try that with my next toy project.
### What I learned using agentic coding
-Stepping into agentic coding with Codex as my "pair programmer" was a genuine shift. I learned a lot—not just about automating code generation, but also about how you have to tightly steer, guide, and audit every line as things move at breakneck speed. I must admit, I sometimes lost track of what all the generated code was actually doing. But as the features seemed to work after a few iterations, I was satisfied—which is a bit concerning. Imagine if I approved a PR for a production-grade deployment without fully understanding what it was doing (and not a toy project like in this post).
-
-Discussing requirements with Codex forced me to clarify features and spot logical pitfalls earlier. All those fast iterations meant I was constantly coaxing more helpful, less ambiguous code out of the model—making me rethink how to break features into clear, testable steps.
+Stepping into agentic coding with Codex as my "pair programmer" was a big shift. I learned a lot—not just about automating code generation, but also about how you have to tightly steer, guide, and audit every line as things move at high speed. I must admit, I sometimes lost track of what all the generated code was actually doing. But as the features seemed to work after a few iterations, I was satisfied—which is a bit concerning. Imagine if I approved a PR for a production-grade deployment without fully understanding what it was doing (and not a toy project like in this post).
### How much time did I save?
-Did it buy me speed? Let's do some back-of-the-envelope math:
+Did it buy me speed?
* Say each commit takes Codex about 2 minutes to generate, guide, and review, times 179 commits = about _6 hours of active development_.
* If you coded it all yourself, including all the bug fixes, features, design, and documentation, you might spend _10–20 hours_.
-* That's a couple of days potential savings.
+* That's a couple of days of potential savings—and I am by no means an expert in agentic coding, since this was my first completed agentic coding project.
## Conclusion
-Building Task Samurai with agentic coding was a wild ride—rapid feature growth, plenty of churns, countless fast fixes, and more merge commits I'd expected. Keep the iterations short (or maybe in my next experiment, much larger, with better and more complete design before generating a single line of code), keep tests and documentation concise, and review and refine for final polish at the end. Even with the bumps along the way, shipping a polished terminal UI in days instead of weeks is a testament to the raw power (and some hazards) of agentic development.
+Building Task Samurai with agentic coding was a wild ride—rapid feature growth, countless fast fixes, and more merge commits than I'd expected. Keep the iterations short (or maybe in my next experiment, much larger, with better and more complete design before generating a single line of code), keep tests and documentation concise, and review and refine for final polish at the end. Even with the bumps along the way, shipping a polished terminal UI in days instead of weeks is a testament to the power of agentic development.
Am I an agentic coding expert now? I don't think so. There are still many things to learn, and the landscape is constantly evolving.
-While working on Task Samurai, there were times I genuinely missed manual coding and the satisfaction that comes from writing every line yourself, debugging issues manually, and crafting solutions from scratch. However, this is the direction in which the industry seems to be shifting, unfortunately. If applied correctly, AI will boost performance, and if you don't use AI, your next performance review may be awkward.
+While working on Task Samurai, there were times I missed manual coding and the satisfaction that comes from writing every line yourself, debugging issues manually, and crafting solutions from scratch. However, this is the direction in which the industry seems to be shifting, unfortunately. If applied correctly, AI will boost performance, and if you don't use AI, your next performance review may be awkward.
Personally, I am not sure whether I like where the industry is going with agentic coding. I love "traditional" coding, and with agentic coding you operate at a higher level and don't interact directly with code as often, which I would miss. I think that in the future, designing, reviewing, and being able to read and understand code will be more important than writing code by hand.
diff --git a/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-6.gmi b/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-6.gmi
index f0ed0bf5..df1d7b40 100644
--- a/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-6.gmi
+++ b/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-6.gmi
@@ -22,23 +22,17 @@ This is the sixth blog post about the f3s series for self-hosting demands in a h
* ⇢ ⇢ ⇢ Generating encryption keys
* ⇢ ⇢ ⇢ Configuring `zdata` ZFS pool encryption
* ⇢ ⇢ ⇢ Migrating Bhyve VMs to encrypted `bhyve` ZFS volume
-* ⇢ ⇢ CARP (Common Address Redundancy Protocol)
-* ⇢ ⇢ ⇢ How CARP Works
-* ⇢ ⇢ ⇢ Configuring CARP
-* ⇢ ⇢ ⇢ CARP State Change Notifications
* ⇢ ⇢ ZFS Replication with zrepl
* ⇢ ⇢ ⇢ Understanding Replication Requirements
-* ⇢ ⇢ ⇢ Why zrepl instead of HAST?
+* ⇢ ⇢ ⇢ Why `zrepl` instead of HAST?
* ⇢ ⇢ ⇢ Installing zrepl
* ⇢ ⇢ ⇢ Checking ZFS pools
-* ⇢ ⇢ ⇢ Configuring zrepl with WireGuard tunnel
-* ⇢ ⇢ ⇢ Configuring zrepl on f0 (source)
-* ⇢ ⇢ ⇢ Configuring zrepl on f1 (sink)
-* ⇢ ⇢ ⇢ Enabling and starting zrepl services
+* ⇢ ⇢ ⇢ Configuring `zrepl` with WireGuard tunnel
+* ⇢ ⇢ ⇢ Configuring `zrepl` on `f0` (source)
+* ⇢ ⇢ ⇢ Configuring `zrepl` on `f1` (sink)
+* ⇢ ⇢ ⇢ Enabling and starting `zrepl` services
* ⇢ ⇢ ⇢ Verifying replication
* ⇢ ⇢ ⇢ Monitoring replication
-* ⇢ ⇢ ⇢ A note about the Bhyve VM replication
-* ⇢ ⇢ ⇢ Quick status check commands
* ⇢ ⇢ ⇢ Verifying replication after reboot
* ⇢ ⇢ ⇢ Understanding Failover Limitations and Design Decisions
* ⇢ ⇢ ⇢ ⇢ Why Manual Failover?
@@ -50,6 +44,10 @@ This is the sixth blog post about the f3s series for self-hosting demands in a h
* ⇢ ⇢ ⇢ Configuring automatic key loading on boot
* ⇢ ⇢ ⇢ Troubleshooting: Replication broken due to modified destination
* ⇢ ⇢ ⇢ Forcing a full resync
+* ⇢ ⇢ CARP (Common Address Redundancy Protocol)
+* ⇢ ⇢ ⇢ How CARP Works
+* ⇢ ⇢ ⇢ Configuring CARP
+* ⇢ ⇢ ⇢ CARP State Change Notifications
* ⇢ ⇢ Future Storage Explorations
* ⇢ ⇢ ⇢ MinIO for S3-Compatible Object Storage
* ⇢ ⇢ ⇢ MooseFS for Distributed High Availability
@@ -183,11 +181,7 @@ Using USB flash drives as hardware key storage provides an elegant solution. The
### UFS on USB keys
-
-We'll format the USB drives with UFS (Unix File System) rather than ZFS for several reasons:
-
-* Simplicity: UFS has less overhead for small, removable media
-* Reliability: No ZFS pool import/export issues with removable devices
+We'll format the USB drives with UFS (Unix File System) rather than ZFS: for small, removable media, UFS has less overhead, and we avoid ZFS pool import/export issues with removable devices.
Let's see the USB keys:
@@ -348,107 +342,9 @@ zroot/bhyve/rocky encryptionroot zroot/bhyve -
zroot/bhyve/rocky keystatus available -
```
-## CARP (Common Address Redundancy Protocol)
-
-High availability is crucial for storage systems. If the NFS server goes down, all pods lose access to their persistent data. CARP provides a solution by creating a virtual IP address that automatically moves between servers during failures.
-
-### How CARP Works
-
-CARP allows multiple hosts to share a virtual IP address (VIP). The hosts communicate using multicast to elect a MASTER, while others remain as BACKUP. When the MASTER fails, a BACKUP automatically promotes itself, and the VIP moves to the new MASTER. This happens within seconds, minimizing downtime.
-
-Key benefits for our storage system:
-* Automatic failover: No manual intervention required for basic failures
-* Transparent to clients: Pods continue using the same IP address
-* Works with stunnel: The VIP ensures encrypted connections follow the active server
-* Simple configuration: Just a single line in rc.conf
-
-### Configuring CARP
-
-First, add the CARP configuration to `/etc/rc.conf` on both f0 and f1:
-
-```sh
-# The virtual IP 192.168.1.138 will float between f0 and f1
-ifconfig_re0_alias0="inet vhid 1 pass testpass alias 192.168.1.138/32"
-```
-
-Parameters explained:
-* `vhid 1`: Virtual Host ID - must match on all CARP members
-* `pass testpass`: Password for CARP authentication (use a stronger password in production)
-* `alias 192.168.1.138/32`: The virtual IP address with a /32 netmask
-
-Next, update `/etc/hosts` on all nodes (n0, n1, n2, r0, r1, r2) to resolve the VIP hostname:
-
-```
-192.168.1.138 f3s-storage-ha f3s-storage-ha.lan f3s-storage-ha.lan.buetow.org
-192.168.2.138 f3s-storage-ha f3s-storage-ha.wg0 f3s-storage-ha.wg0.wan.buetow.org
-```
-
-This allows clients to connect to `f3s-storage-ha` regardless of which physical server is currently the MASTER.
-
-### CARP State Change Notifications
-
-To properly manage services during failover, we need to detect CARP state changes. FreeBSD's devd system can notify us when CARP transitions between MASTER and BACKUP states.
-
-Add this to `/etc/devd.conf` on both f0 and f1:
-
-paul@f0:~ % cat <<END | doas tee -a /etc/devd.conf
-notify 0 {
- match "system" "CARP";
- match "subsystem" "[0-9]+@[0-9a-z.]+";
- match "type" "(MASTER|BACKUP)";
- action "/usr/local/bin/carpcontrol.sh $subsystem $type";
-};
-END
-
-Next, create the CARP control script that will restart stunnel when CARP state changes:
-
-```sh
-paul@f0:~ % doas tee /usr/local/bin/carpcontrol.sh <<'EOF'
-#!/bin/sh
-# CARP state change handler for storage failover
-
-subsystem=$1
-state=$2
-
-logger "CARP state change: $subsystem is now $state"
-
-case "$state" in
- MASTER)
- # Restart stunnel to bind to the VIP
- service stunnel restart
- logger "Restarted stunnel for MASTER state"
- ;;
- BACKUP)
- # Stop stunnel since we can't bind to VIP as BACKUP
- service stunnel stop
- logger "Stopped stunnel for BACKUP state"
- ;;
-esac
-EOF
-
-paul@f0:~ % doas chmod +x /usr/local/bin/carpcontrol.sh
-
-# Copy the same script to f1
-paul@f0:~ % scp /usr/local/bin/carpcontrol.sh f1:/tmp/
-paul@f1:~ % doas mv /tmp/carpcontrol.sh /usr/local/bin/
-paul@f1:~ % doas chmod +x /usr/local/bin/carpcontrol.sh
-```
-
-Enable CARP in /boot/loader.conf:
-
-```sh
-paul@f0:~ % echo 'carp_load="YES"' | doas tee -a /boot/loader.conf
-carp_load="YES"
-paul@f1:~ % echo 'carp_load="YES"' | doas tee -a /boot/loader.conf
-carp_load="YES"
-```
-
-Then reboot both hosts or run `doas kldload carp` to load the module immediately.
-
-
## ZFS Replication with zrepl
-Data replication is the cornerstone of high availability. While CARP handles IP failover, we need continuous data replication to ensure the backup server has current data when it becomes active. Without replication, failover would result in data loss or require shared storage (like iSCSI), which introduces a single point of failure.
+Data replication is the cornerstone of high availability. While CARP handles IP failover (see later in this post), we need continuous data replication to ensure the backup server has current data when it becomes active. Without replication, failover would result in data loss or require shared storage (like iSCSI), which introduces a single point of failure.
### Understanding Replication Requirements
@@ -459,32 +355,23 @@ Our storage system has different replication needs:
The replication frequency determines your Recovery Point Objective (RPO) - the maximum acceptable data loss. With 1-minute replication, you lose at most 1 minute of changes during an unplanned failover.
-### Why zrepl instead of HAST?
+### Why `zrepl` instead of HAST?
-While HAST (Highly Available Storage) is FreeBSD's native solution for high-availability storage, I've chosen zrepl for several important reasons:
+While HAST (Highly Available Storage) is FreeBSD's native solution for high-availability storage, I've chosen `zrepl` for several important reasons:
-1. HAST can cause ZFS corruption: HAST operates at the block level and doesn't understand ZFS's transactional semantics. During failover, in-flight transactions can lead to corrupted zpools. I've experienced this firsthand - the automatic failover would trigger while ZFS was still writing, resulting in an unmountable pool.
+* HAST can cause ZFS corruption: HAST operates at the block level and doesn't understand ZFS's transactional semantics. During failover, in-flight transactions can lead to corrupted zpools. I've experienced this firsthand - the automatic failover would trigger while ZFS was still writing, resulting in an unmountable pool.
+* ZFS-aware replication: `zrepl` understands ZFS datasets and snapshots. It replicates at the dataset level, ensuring each snapshot is a consistent point-in-time copy. This is fundamentally safer than block-level replication.
+* Snapshot history: With zrepl, you get multiple recovery points (every minute for NFS data in our setup). If corruption occurs, you can roll back to any previous snapshot. HAST only gives you the current state.
+* Easier recovery: When something goes wrong with zrepl, you still have intact snapshots on both sides. With HAST, a corrupted primary often means a corrupted secondary too.
-2. ZFS-aware replication: zrepl understands ZFS datasets and snapshots. It replicates at the dataset level, ensuring each snapshot is a consistent point-in-time copy. This is fundamentally safer than block-level replication.
-
-3. Snapshot history: With zrepl, you get multiple recovery points (every minute for NFS data in our setup). If corruption occurs, you can roll back to any previous snapshot. HAST only gives you the current state.
-
-4. Easier recovery: When something goes wrong with zrepl, you still have intact snapshots on both sides. With HAST, a corrupted primary often means a corrupted secondary too.
-
-5. Network flexibility: zrepl works over any TCP connection (in our case, WireGuard), while HAST requires dedicated network configuration.
-
-The 5-minute replication window is perfectly acceptable for my personal use cases. This isn't a high-frequency trading system or a real-time database - it's storage for personal projects, development work, and home lab experiments. Losing at most 5 minutes of work in a disaster scenario is a reasonable trade-off for the reliability and simplicity of snapshot-based replication.
+The 1-minute replication window is perfectly acceptable for my personal use cases. This isn't a high-frequency trading system or a real-time database—it's storage for personal projects, development work, and home lab experiments. Losing at most 1 minute of work in a disaster scenario is a reasonable trade-off for the reliability and simplicity of snapshot-based replication. Also, in the case of "1 minute of data loss," I would very likely still have the data available on the client side.
### Installing zrepl
-First, install zrepl on both hosts:
+First, install `zrepl` on both hosts involved (we will replicate data from `f0` to `f1`):
-```
-# On f0
+```sh
paul@f0:~ % doas pkg install -y zrepl
-
-# On f1
-paul@f1:~ % doas pkg install -y zrepl
```
### Checking ZFS pools
@@ -513,7 +400,7 @@ NAME USED AVAIL REFER MOUNTPOINT
zdata/enc 200K 899G 200K /data/enc
```
-### Configuring zrepl with WireGuard tunnel
+### Configuring `zrepl` with WireGuard tunnel
Since we have a WireGuard tunnel between f0 and f1, we'll use TCP transport over the secure tunnel instead of SSH. First, check the WireGuard IP addresses:
@@ -526,7 +413,7 @@ paul@f1:~ % ifconfig wg0 | grep inet
inet 192.168.2.131 netmask 0xffffff00
```
-### Configuring zrepl on f0 (source)
+### Configuring `zrepl` on `f0` (source)
First, create a dedicated dataset for NFS data that will be replicated:
@@ -535,7 +422,7 @@ First, create a dedicated dataset for NFS data that will be replicated:
paul@f0:~ % doas zfs create zdata/enc/nfsdata
```
-Create the zrepl configuration on f0:
+Create the `zrepl` configuration on f0:
```sh
paul@f0:~ % doas tee /usr/local/etc/zrepl/zrepl.yml <<'EOF'
@@ -554,7 +441,7 @@ jobs:
filesystems:
"zdata/enc/nfsdata": true
send:
- encrypted: false
+ encrypted: true
snapshotting:
type: periodic
prefix: zrepl_
@@ -575,7 +462,7 @@ jobs:
filesystems:
"zroot/bhyve/fedora": true
send:
- encrypted: false
+ encrypted: true
snapshotting:
type: periodic
prefix: zrepl_
@@ -590,16 +477,21 @@ jobs:
EOF
```
-Key configuration notes:
-* We're using two separate replication jobs with different intervals:
- - `f0_to_f1_nfsdata`: Replicates NFS data every minute for faster failover recovery
- - `f0_to_f1_fedora`: Replicates Fedora VM every 10 minutes (less critical for NFS operations)
+We're using two separate replication jobs with different intervals:
+
+* `f0_to_f1_nfsdata`: Replicates NFS data every minute for faster failover recovery
+* `f0_to_f1_fedora`: Replicates Fedora VM every 10 minutes (less critical for NFS operations)
+
+The Fedora VM is only used for development purposes, so it doesn't require replication as frequent as the NFS data. It's off-topic for this blog series, but it showcases zrepl's flexibility in handling different datasets with varying replication needs.
+
+Furthermore:
+
* We're specifically replicating `zdata/enc/nfsdata` instead of the entire `zdata/enc` dataset. This dedicated dataset will contain all the data we later want to expose via NFS, keeping a clear separation between replicated NFS data and other local encrypted data.
* The `send: encrypted: true` option replicates the datasets as raw, natively encrypted ZFS streams. The snapshots stay encrypted at rest on f1 and the encryption keys never have to leave f0, while the WireGuard tunnel between f0 and f1 additionally protects the data in transit.
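As a quick sanity check of the two snapshot intervals (1 minute for `f0_to_f1_nfsdata`, 10 minutes for `f0_to_f1_fedora`), here is the worst-case data loss and daily snapshot count per job as plain shell arithmetic — a sketch, not output from the actual hosts:

```sh
# Worst-case data loss (RPO) per job equals its snapshot interval;
# also count how many snapshots each job creates per day.
nfsdata_interval_min=1
fedora_interval_min=10
minutes_per_day=$((60 * 24))

echo "nfsdata: worst-case loss ${nfsdata_interval_min} min, $((minutes_per_day / nfsdata_interval_min)) snapshots/day"
echo "fedora: worst-case loss ${fedora_interval_min} min, $((minutes_per_day / fedora_interval_min)) snapshots/day"
```

This is also why the pruning policy matters: the 1-minute job would otherwise accumulate well over a thousand snapshots per day.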
-### Configuring zrepl on f1 (sink)
+### Configuring `zrepl` on `f1` (sink)
-Create the zrepl configuration on f1:
+On `f1` we configure `zrepl` to receive the data as follows:
```sh
# First create a dedicated sink dataset
@@ -613,7 +505,7 @@ global:
format: human
jobs:
- - name: "sink"
+ - name: sink
type: sink
serve:
type: tcp
@@ -627,41 +519,41 @@ jobs:
EOF
```
-### Enabling and starting zrepl services
+### Enabling and starting `zrepl` services
-Enable and start zrepl on both hosts:
+Enable and start `zrepl` on both hosts:
```sh
# On f0
paul@f0:~ % doas sysrc zrepl_enable=YES
zrepl_enable: -> YES
-paul@f0:~ % doas service zrepl start
+paul@f0:~ % doas service zrepl start
Starting zrepl.
# On f1
paul@f1:~ % doas sysrc zrepl_enable=YES
zrepl_enable: -> YES
-paul@f1:~ % doas service zrepl start
+paul@f1:~ % doas service zrepl start
Starting zrepl.
```
### Verifying replication
-Check the replication status:
+To check the replication status, we run:
```sh
-# On f0, check zrepl status (use raw mode for non-tty)
-paul@f0:~ % doas zrepl status --mode raw | grep -A2 "Replication"
+# On f0, check zrepl status (use raw mode for non-tty)
+paul@f0:~ % doas zrepl status --mode raw | grep -A2 "Replication"
"Replication":{"StartAt":"2025-07-01T22:31:48.712143123+03:00"...
# Check if services are running
-paul@f0:~ % doas service zrepl status
+paul@f0:~ % doas service zrepl status
zrepl is running as pid 2649.
-paul@f1:~ % doas service zrepl status
+paul@f1:~ % doas service zrepl status
zrepl is running as pid 2574.
-# Check for zrepl snapshots on source
+# Check for zrepl snapshots on source
paul@f0:~ % doas zfs list -t snapshot -r zdata/enc | grep zrepl
zdata/enc@zrepl_20250701_193148_000 0B - 176K -
@@ -683,91 +575,37 @@ You can monitor the replication progress with:
```sh
# Real-time status
-paul@f0:~ % doas zrepl status --mode interactive
+paul@f0:~ % doas zrepl status --mode interactive
# Check specific job details
-paul@f0:~ % doas zrepl status --job f0_to_f1
+paul@f0:~ % doas zrepl status --job f0_to_f1
```
-With this setup, both `zdata/enc/nfsdata` and `zroot/bhyve/fedora` on f0 will be automatically replicated to f1 every 5 minutes, with encrypted snapshots preserved on both sides. The pruning policy ensures that we keep the last 10 snapshots while managing disk space efficiently.
+With this setup, both `zdata/enc/nfsdata` and `zroot/bhyve/fedora` on f0 will be automatically replicated to f1 every minute (every 10 minutes for the Fedora VM), with encrypted snapshots preserved on both sides. The pruning policy ensures that we keep the last 10 snapshots while managing disk space efficiently.
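Note that keeping the last 10 snapshots gives the two jobs very different history windows — a sketch in plain shell arithmetic, assuming a `last_n` count of 10 for both jobs:

```sh
# Retained history window = snapshot interval * number of snapshots kept.
keep=10
nfsdata_interval_min=1
fedora_interval_min=10

echo "nfsdata history window: $((nfsdata_interval_min * keep)) minutes"
echo "fedora history window: $((fedora_interval_min * keep)) minutes"
```

So the frequently replicated NFS data can only be rolled back about 10 minutes, while the VM dataset covers roughly an hour and a half; tune `keep` per job if you want deeper history.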
The replicated data appears on f1 under `zdata/sink/` with the source host and dataset hierarchy preserved:
* `zdata/enc/nfsdata` → `zdata/sink/f0/zdata/enc/nfsdata`
* `zroot/bhyve/fedora` → `zdata/sink/f0/zroot/bhyve/fedora`
-This is by design - zrepl preserves the complete path from the source to ensure there are no conflicts when replicating from multiple sources. The replication uses the WireGuard tunnel for secure, encrypted transport between nodes.
-
-### A note about the Bhyve VM replication
-
-While replicating a Bhyve VM (Fedora in this case) is slightly off-topic for the f3s series, I've included it here as it demonstrates zrepl's flexibility. This is a development VM I use occasionally to log in remotely for certain development tasks. Having it replicated ensures I have a backup copy available on f1 if needed.
-
-### Quick status check commands
-
-Here are the essential commands to monitor replication status:
-
-```sh
-# On the source node (f0) - check if replication is active
-paul@f0:~ % doas zrepl status --job f0_to_f1 | grep -E '(State|Last)'
-State: done
-LastError:
-
-# List all zrepl snapshots on source
-paul@f0:~ % doas zfs list -t snapshot | grep zrepl
-zdata/enc/nfsdata@zrepl_20250701_202530_000 0B - 200K -
-zroot/bhyve/fedora@zrepl_20250701_202530_000 0B - 2.97G -
-
-# On the sink node (f1) - verify received datasets
-paul@f1:~ % doas zfs list -r zdata/sink
-NAME USED AVAIL REFER MOUNTPOINT
-zdata/sink 3.0G 896G 200K /data/sink
-zdata/sink/f0 3.0G 896G 200K none
-zdata/sink/f0/zdata 472K 896G 200K none
-zdata/sink/f0/zdata/enc 272K 896G 200K none
-zdata/sink/f0/zdata/enc/nfsdata 176K 896G 176K none
-zdata/sink/f0/zroot 2.9G 896G 200K none
-zdata/sink/f0/zroot/bhyve 2.9G 896G 200K none
-zdata/sink/f0/zroot/bhyve/fedora 2.9G 896G 2.97G none
-
-# Check received snapshots on sink
-paul@f1:~ % doas zfs list -t snapshot -r zdata/sink | grep zrepl | wc -l
- 3
-
-# Monitor replication progress in real-time (on source)
-paul@f0:~ % doas zrepl status --mode interactive
-
-# Check last replication time (on source)
-paul@f0:~ % doas zrepl status --job f0_to_f1 | grep -A1 "Replication"
-Replication:
- Status: Idle (last run: 2025-07-01T22:41:48)
-
-# View zrepl logs for troubleshooting
-paul@f0:~ % doas tail -20 /var/log/zrepl.log | grep -E '(error|warn|replication)'
-```
-
-These commands provide a quick way to verify that:
-
-* Replication jobs are running without errors
-* Snapshots are being created on the source
-* Data is being received on the sink
-* The replication schedule is being followed
+This is by design - `zrepl` preserves the complete path from the source to ensure there are no conflicts when replicating from multiple sources. The replication uses the WireGuard tunnel for secure, encrypted transport between nodes.
### Verifying replication after reboot
-The zrepl service is configured to start automatically at boot. After rebooting both hosts:
+The `zrepl` service is configured to start automatically at boot. After rebooting both hosts:
```sh
paul@f0:~ % uptime
11:17PM up 1 min, 0 users, load averages: 0.16, 0.06, 0.02
-paul@f0:~ % doas service zrepl status
+paul@f0:~ % doas service zrepl status
zrepl is running as pid 2366.
-paul@f1:~ % doas service zrepl status
+paul@f1:~ % doas service zrepl status
zrepl is running as pid 2309.
# Check that new snapshots are being created and replicated
-paul@f0:~ % doas zfs list -t snapshot | grep zrepl | tail -2
+paul@f0:~ % doas zfs list -t snapshot | grep zrepl | tail -2
zdata/enc/nfsdata@zrepl_20250701_202530_000 0B - 200K -
zroot/bhyve/fedora@zrepl_20250701_202530_000 0B - 2.97G -
@@ -780,6 +618,8 @@ The timestamps confirm that replication resumed automatically after the reboot,
### Understanding Failover Limitations and Design Decisions
+
+
#### Why Manual Failover?
This storage system intentionally uses manual failover rather than automatic failover. This might seem counterintuitive for a "high availability" system, but it's a deliberate design choice based on real-world experience:
@@ -816,7 +656,7 @@ For true high-availability NFS, you might consider:
Note: While HAST+CARP is often suggested for HA storage, it can cause filesystem corruption in practice, especially with ZFS. The block-level replication of HAST doesn't understand ZFS's transactional model, leading to inconsistent states during failover.
-The current zrepl setup, despite requiring manual intervention, is actually safer because:
+The current `zrepl` setup, despite requiring manual intervention, is actually safer because:
* ZFS snapshots are always consistent
* Replication is ZFS-aware (not just block-level)
@@ -912,12 +752,12 @@ paul@f0:~ % doas zfs destroy zdata/enc/nfsdata@failback
paul@f1:~ % doas zfs set readonly=on zdata/sink/f0/zdata/enc/nfsdata
paul@f1:~ % doas zfs destroy zdata/sink/f0/zdata/enc/nfsdata@failback
-# Stop zrepl services first - CRITICAL!
-paul@f0:~ % doas service zrepl stop
-paul@f1:~ % doas service zrepl stop
+# Stop zrepl services first - CRITICAL!
+paul@f0:~ % doas service zrepl stop
+paul@f1:~ % doas service zrepl stop
-# Clean up any zrepl snapshots on f0
-paul@f0:~ % doas zfs list -t snapshot -r zdata/enc/nfsdata | grep zrepl | \
+# Clean up any zrepl snapshots on f0
+paul@f0:~ % doas zfs list -t snapshot -r zdata/enc/nfsdata | grep zrepl | \
awk '{print $1}' | xargs -I {} doas zfs destroy {}
# Clean up and destroy the entire replicated structure on f1
@@ -953,19 +793,19 @@ paul@f1:~ % doas zfs load-key -L file:///keys/f0.lan.buetow.org:zdata.key \
zdata/sink/f0/zdata/enc/nfsdata
paul@f1:~ % doas zfs mount zdata/sink/f0/zdata/enc/nfsdata
-# Now restart zrepl services
-paul@f0:~ % doas service zrepl start
-paul@f1:~ % doas service zrepl start
+# Now restart zrepl services
+paul@f0:~ % doas service zrepl start
+paul@f1:~ % doas service zrepl start
# Verify replication is working
-paul@f0:~ % doas zrepl status --job f0_to_f1
+paul@f0:~ % doas zrepl status --job f0_to_f1
```
Important notes about failback:
* The `-F` flag forces a rollback on f0, destroying any local changes
* Replication often won't resume automatically after a forced receive
-* You must clean up old zrepl snapshots on both sides
+* You must clean up old `zrepl` snapshots on both sides
* Creating a manual snapshot helps re-establish the replication relationship
* Always verify replication status after the failback procedure
* The first replication after failback will be a full send of the current state
@@ -976,7 +816,7 @@ Here's a real test of the failback procedure:
```sh
# Simulate failure: Stop replication on f0
-paul@f0:~ % doas service zrepl stop
+paul@f0:~ % doas service zrepl stop
# On f1: Take over by making the dataset writable
paul@f1:~ % doas zfs set readonly=off zdata/sink/f0/zdata/enc/nfsdata
@@ -1015,7 +855,7 @@ Success! The failover data from f1 is now on f0. To resume normal replication, y
1. Clean up old snapshots on both sides
2. Create a new manual baseline snapshot
-3. Restart zrepl services
+3. Restart `zrepl` services
Key learnings from the test:
@@ -1086,9 +926,9 @@ Important notes:
If you see the error "cannot receive incremental stream: destination has been modified since most recent snapshot", it means the read-only flag was accidentally removed on f1. To fix without a full resync:
```sh
-# Stop zrepl on both servers
-paul@f0:~ % doas service zrepl stop
-paul@f1:~ % doas service zrepl stop
+# Stop zrepl on both servers
+paul@f0:~ % doas service zrepl stop
+paul@f1:~ % doas service zrepl stop
# Find the last common snapshot
paul@f0:~ % doas zfs list -t snapshot -o name,creation zdata/enc/nfsdata
@@ -1101,8 +941,8 @@ paul@f1:~ % doas zfs rollback -r zdata/sink/f0/zdata/enc/nfsdata@zrepl_20250705_
paul@f1:~ % doas zfs set readonly=on zdata/sink/f0/zdata/enc/nfsdata
# Restart zrepl
-paul@f0:~ % doas service zrepl start
-paul@f1:~ % doas service zrepl start
+paul@f0:~ % doas service zrepl start
+paul@f1:~ % doas service zrepl start
```
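Conceptually, "find the last common snapshot" is a set intersection of the two snapshot lists followed by taking the newest entry. A sketch with hypothetical names (in practice both lists come from `zfs list -t snapshot`; zrepl's timestamped names sort chronologically):

```sh
# Illustrative: intersect two sorted snapshot lists, keep the newest common one.
src=$(mktemp); dst=$(mktemp)
printf '%s\n' zrepl_20250705_1 zrepl_20250705_2 zrepl_20250705_3 > "$src"
printf '%s\n' zrepl_20250705_1 zrepl_20250705_2 > "$dst"
last_common=$(comm -12 "$src" "$dst" | tail -n 1)
echo "last common snapshot: $last_common"
rm -f "$src" "$dst"
```

Here the source is one snapshot ahead, so `zrepl_20250705_2` is the rollback target on the sink.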
### Forcing a full resync
@@ -1111,8 +951,8 @@ If replication gets out of sync and incremental updates fail:
```sh
# Stop services
-paul@f0:~ % doas service zrepl stop
-paul@f1:~ % doas service zrepl stop
+paul@f0:~ % doas service zrepl stop
+paul@f1:~ % doas service zrepl stop
# On f1: Release holds and destroy the dataset
paul@f1:~ % doas zfs holds -r zdata/sink/f0/zdata/enc/nfsdata | \
@@ -1137,17 +977,122 @@ paul@f1:~ % doas zfs mount zdata/sink/f0/zdata/enc/nfsdata
# Clean up and restart
paul@f0:~ % doas zfs destroy zdata/enc/nfsdata@resync
paul@f1:~ % doas zfs destroy zdata/sink/f0/zdata/enc/nfsdata@resync
-paul@f0:~ % doas service zrepl start
-paul@f1:~ % doas service zrepl start
+paul@f0:~ % doas service zrepl start
+paul@f1:~ % doas service zrepl start
```
ZFS auto scrubbing....~?
Backup of the keys on the key locations (all keys on all 3 USB keys)
+## CARP (Common Address Redundancy Protocol)
+
+High availability is crucial for storage systems. If the storage server goes down, all pods lose access to their persistent data. CARP provides a solution by creating a virtual IP address that automatically moves between servers during failures.
+
+### How CARP Works
+
+CARP allows two hosts to share a virtual IP address (VIP). The hosts communicate using multicast to elect a MASTER, while the other remains in the BACKUP state. When the MASTER fails, the BACKUP automatically promotes itself, and the VIP moves to the new MASTER. This happens within seconds.
+
+Key benefits for our storage system:
+
+* Automatic failover: No manual intervention is required for basic failures, although there are a few limitations. The backup will only have read-only access to the available data, as we will learn later. However, we could manually promote it to read-write if needed.
+* Transparent to clients: Pods continue using the same IP address
+* Works with stunnel: Behind the VIP there will be a `stunnel` process running, which ensures encrypted connections follow the active server
+* Simple configuration
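Under the hood, the election is driven by CARP advertisements: the host advertising most frequently (lowest `advbase`/`advskew`) wins MASTER. The configuration below leaves both at their defaults, so whichever host comes up first stays MASTER. The preference rule itself, sketched in shell with hypothetical skew values:

```sh
# Sketch of CARP's preference rule: the lower advskew advertises more
# often and therefore wins the MASTER election. Values are hypothetical;
# both f0 and f1 use the default (0) in this setup.
f0_advskew=0
f1_advskew=100
if [ "$f0_advskew" -lt "$f1_advskew" ]; then master='f0'; else master='f1'; fi
echo "preferred MASTER: $master"
```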
+
+### Configuring CARP
+
+First, add the CARP configuration to `/etc/rc.conf` on both f0 and f1:
+
+```sh
+# The virtual IP 192.168.1.138 will float between f0 and f1
+ifconfig_re0_alias0="inet vhid 1 pass testpass alias 192.168.1.138/32"
+```
+
+Where:
+
+* `vhid 1`: Virtual Host ID - must match on all CARP members
+* `pass testpass`: Password for CARP authentication (if you follow this, use a different password!)
+* `alias 192.168.1.138/32`: The virtual IP address with a /32 netmask
+
+Next, update `/etc/hosts` on all nodes (n0, n1, n2, r0, r1, r2) to resolve the VIP hostname:
+
+```
+192.168.1.138 f3s-storage-ha f3s-storage-ha.lan f3s-storage-ha.lan.buetow.org
+```
+
+This allows clients to connect to `f3s-storage-ha` regardless of which physical server is currently the MASTER.
+
+### CARP State Change Notifications
+
+To properly manage services during failover, we need to detect CARP state changes. FreeBSD's devd system can notify us when CARP transitions between MASTER and BACKUP states.
+
+Add this to `/etc/devd.conf` on both f0 and f1:
+
+```sh
+paul@f0:~ % cat <<'END' | doas tee -a /etc/devd.conf
+notify 0 {
+    match "system" "CARP";
+    match "subsystem" "[0-9]+@[0-9a-z.]+";
+    match "type" "(MASTER|BACKUP)";
+    action "/usr/local/bin/carpcontrol.sh $subsystem $type";
+};
+END
+```
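The `subsystem` value devd hands to the script has the form `<vhid>@<interface>`, e.g. `1@re0` for vhid 1 on interface `re0` (a hypothetical sample). The match pattern can be sanity-checked with `grep -E`:

```sh
# Check that a sample CARP subsystem string matches the devd pattern.
pattern='[0-9]+@[0-9a-z.]+'
subsystem='1@re0'   # hypothetical: vhid 1 on interface re0
echo "$subsystem" | grep -Eq "$pattern" && result=match || result=nomatch
echo "$result"
```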
+
+Next, create the CARP control script, which starts or stops the NFS and stunnel services when the CARP state changes:
+
+```sh
+paul@f0:~ % doas tee /usr/local/bin/carpcontrol.sh <<'EOF'
+#!/bin/sh
+# CARP state change control script
+
+case "$1" in
+ MASTER)
+ logger "CARP state changed to MASTER, starting services"
+ service rpcbind start >/dev/null 2>&1
+ service mountd start >/dev/null 2>&1
+ service nfsd start >/dev/null 2>&1
+ service nfsuserd start >/dev/null 2>&1
+ service stunnel restart >/dev/null 2>&1
+ logger "CARP MASTER: NFS and stunnel services started"
+ ;;
+ BACKUP)
+ logger "CARP state changed to BACKUP, stopping services"
+ service stunnel stop >/dev/null 2>&1
+ service nfsd stop >/dev/null 2>&1
+ service mountd stop >/dev/null 2>&1
+ service nfsuserd stop >/dev/null 2>&1
+ logger "CARP BACKUP: NFS and stunnel services stopped"
+ ;;
+ *)
+ logger "CARP state changed to $1 (unhandled)"
+ ;;
+esac
+EOF
+
+paul@f0:~ % doas chmod +x /usr/local/bin/carpcontrol.sh
+
+# Copy the same script to f1
+paul@f0:~ % scp /usr/local/bin/carpcontrol.sh f1:/tmp/
+paul@f1:~ % doas mv /tmp/carpcontrol.sh /usr/local/bin/
+paul@f1:~ % doas chmod +x /usr/local/bin/carpcontrol.sh
+```
+
+Note that we perform several tasks in the `carpcontrol.sh` script, which starts and/or stops all the services required for an NFS server running over an encrypted tunnel (via `stunnel`). We will set up all those services later in this blog post!
+
+To load the CARP kernel module at boot via `/boot/loader.conf`, run:
+
+```sh
+paul@f0:~ % echo 'carp_load="YES"' | doas tee -a /boot/loader.conf
+carp_load="YES"
+paul@f1:~ % echo 'carp_load="YES"' | doas tee -a /boot/loader.conf
+carp_load="YES"
+```
+
+Then reboot both hosts or run `doas kldload carp` to load the module immediately.
+
## Future Storage Explorations
-While zrepl provides excellent snapshot-based replication for disaster recovery, there are other storage technologies worth exploring for the f3s project:
+While `zrepl` provides excellent snapshot-based replication for disaster recovery, there are other storage technologies worth exploring for the f3s project:
### MinIO for S3-Compatible Object Storage
@@ -1913,7 +1858,7 @@ With NFS servers running on both f0 and f1 and stunnel bound to the CARP VIP:
* Data consistency: ZFS replication ensures f1 has recent data (within 5-minute window)
* Read-only replica: The replicated dataset on f1 is always mounted read-only to prevent breaking replication
* Manual intervention required for full RW failover: When f1 becomes MASTER, you must:
- 1. Stop zrepl to prevent conflicts: `doas service zrepl stop`
+ 1. Stop `zrepl` to prevent conflicts: `doas service zrepl stop`
2. Make the replicated dataset writable: `doas zfs set readonly=off zdata/sink/f0/zdata/enc/nfsdata`
3. Ensure encryption keys are loaded (should be automatic with zfskeys_enable)
4. NFS will automatically start serving read/write requests through the VIP
@@ -2116,7 +2061,7 @@ To check if replication is working correctly:
```sh
# Check replication status
-paul@f0:~ % doas zrepl status
+paul@f0:~ % doas zrepl status
# Check recent snapshots on source
paul@f0:~ % doas zfs list -t snapshot -o name,creation zdata/enc/nfsdata | tail -5
@@ -2128,8 +2073,8 @@ paul@f1:~ % doas zfs list -t snapshot -o name,creation zdata/sink/f0/zdata/enc/n
paul@f1:~ % ls -la /data/nfs/k3svolumes/
```
-Important: If you see "connection refused" errors in zrepl logs, ensure:
-* Both servers have zrepl running (`doas service zrepl status`)
+Important: If you see "connection refused" errors in `zrepl` logs, ensure:
+* Both servers have `zrepl` running (`doas service zrepl status`)
* No firewall or hosts.allow rules are blocking port 8888
* WireGuard is up if using WireGuard IPs for replication
@@ -2156,9 +2101,9 @@ paul@f0:~ % doas showmount -e localhost
# Test write access
[root@r0 ~]# echo "Test after reboot $(date)" > /data/nfs/k3svolumes/test-reboot.txt
-# Verify zrepl is running and replicating
-paul@f0:~ % doas service zrepl status
-paul@f1:~ % doas service zrepl status
+# Verify zrepl is running and replicating
+paul@f0:~ % doas service zrepl status
+paul@f1:~ % doas service zrepl status
```
### Integration with Kubernetes
@@ -2615,7 +2560,7 @@ For reference, with AES-256-GCM on a typical mini PC:
### Replication Bandwidth
-ZFS replication with zrepl is efficient, only sending changed blocks:
+ZFS replication with `zrepl` is efficient, only sending changed blocks:
* Initial sync: Full dataset size (can be large)
* Incremental: Typically <1% of dataset size per snapshot
@@ -2737,7 +2682,7 @@ The storage layer is the foundation for any serious Kubernetes deployment. By bu
* FreeBSD CARP documentation: https://docs.freebsd.org/en/books/handbook/advanced-networking/#carp
* ZFS encryption guide: https://docs.freebsd.org/en/books/handbook/zfs/#zfs-encryption
* Stunnel documentation: https://www.stunnel.org/docs.html
-* zrepl documentation: https://zrepl.github.io/
+* `zrepl` documentation: https://zrepl.github.io/
Other *BSD-related posts:
diff --git a/gemfeed/atom.xml b/gemfeed/atom.xml
index 58e2e2a2..e7e59869 100644
--- a/gemfeed/atom.xml
+++ b/gemfeed/atom.xml
@@ -1,6 +1,6 @@
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
- <updated>2025-07-02T00:37:08+03:00</updated>
+ <updated>2025-07-12T22:45:27+03:00</updated>
<title>foo.zone feed</title>
<subtitle>To be in the .zone!</subtitle>
<link href="gemini://foo.zone/gemfeed/atom.xml" rel="self" />
@@ -781,7 +781,7 @@
<li>⇢ <a href='#where-and-how-to-get-it'>Where and how to get it</a></li>
<li>⇢ <a href='#lessons-learned-from-building-task-samurai-with-agentic-coding'>Lessons learned from building Task Samurai with agentic coding</a></li>
<li>⇢ ⇢ <a href='#developer-workflow'>Developer workflow</a></li>
-<li>⇢ ⇢ <a href='#how-it-went-down'>How it went down</a></li>
+<li>⇢ ⇢ <a href='#how-it-went'>How it went</a></li>
<li>⇢ ⇢ <a href='#what-went-wrong'>What went wrong</a></li>
<li>⇢ ⇢ <a href='#patterns-that-helped'>Patterns that helped</a></li>
<li>⇢ ⇢ <a href='#what-i-learned-using-agentic-coding'>What I learned using agentic coding</a></li>
@@ -798,6 +798,7 @@
<h3 style='display: inline' id='why-does-this-exist'>Why does this exist?</h3><br />
<br />
<span>I wanted to tinker with agentic coding. This project was implemented entirely using OpenAI Codex. (After this blog post was published, I also used the Claude Code CLI.)</span><br />
+<br />
<ul>
<li>I wanted a faster UI for Taskwarrior than other options, like Vit, which is Python-based.</li>
<li>I wanted something built with Bubble Tea, but I never had time to dive deep into it.</li>
@@ -825,17 +826,19 @@
<br />
<h3 style='display: inline' id='developer-workflow'>Developer workflow</h3><br />
<br />
-<span>I was trying out OpenAI Codex because I regularly run out of Claude Code CLI (another agentic coding tool I am trying out currently) credits (it still happens!), but Codex was still available to me. So, I seized the opportunity to push agentic coding a bit more using another platform.</span><br />
+<span>I was trying out OpenAI Codex because I regularly run out of Claude Code CLI (another agentic coding tool I am currently trying out) credits (it still happens!), but Codex was still available to me. So, I took the opportunity to push agentic coding a bit further with another platform.</span><br />
<br />
<span>I didn&#39;t really love the web UI you have to use for Codex, as I usually live in the terminal. But this is all I have for Codex for now, and I thought I&#39;d give it a try regardless. The web UI is simple and pretty straightforward. There&#39;s also a Codex CLI one could use directly in the terminal, but I didn&#39;t get it working. I will try again soon.</span><br />
<br />
+<span class='quote'>Update: Codex CLI now works for me, after OpenAI released a new version!</span><br />
+<br />
<span>For every task given to Codex, it spins up its own container. From there, you can drill down and watch what it is doing. At the end, the result (in the form of a code diff) will be presented. From there, you can make suggestions about what else to change in the codebase. What I found inconvenient is that for every additional change, there&#39;s an overhead because Codex has to spin up a container and bootstrap the entire development environment again, which adds extra delay. That could be eliminated by setting up predefined custom containers, but that feature still seems somewhat limited.</span><br />
<br />
-<span>Once satisfied, you can ask Codex to create a GitHub PR; from there, you can merge it and then pull it to your local laptop or workstation to test the changes again. I found myself looping a lot around the Codex UI, GitHub PRs, and local checkouts.</span><br />
+<span>Once satisfied, you can ask Codex to create a GitHub PR (too bad only GitHub is supported and no other Git hosting services); from there, you can merge it and then pull it to your local laptop or workstation to test the changes again. I found myself looping a lot around the Codex UI, GitHub PRs, and local checkouts.</span><br />
<br />
-<h3 style='display: inline' id='how-it-went-down'>How it went down</h3><br />
+<h3 style='display: inline' id='how-it-went'>How it went</h3><br />
<br />
-<span>Task Samurai&#39;s codebase came together quickly: the entire Git history spans from June 19 to 22, 2025, culminating in 179 commits. Here are the broad strokes:</span><br />
+<span>Task Samurai&#39;s codebase came together quickly: the entire Git history spans from June 19 to 22, 2025, culminating in 179 commits:</span><br />
<br />
<ul>
<li>June 19: Scaffolded the Go boilerplate, set up tests, integrated the Bubble Tea UI framework, and got the first table views showing up.</li>
@@ -849,7 +852,7 @@
<br />
<h3 style='display: inline' id='what-went-wrong'>What went wrong</h3><br />
<br />
-<span>Going agentic isn&#39;t all smooth sailing. Here are the hiccups I ran into, plus a few hard-earned lessons:</span><br />
+<span>Going agentic isn&#39;t all smooth. Here are the hiccups I ran into, plus a few lessons:</span><br />
<br />
<ul>
<li>Merge Floods: Every minor feature or fix existed on its branch, so merging was a constant process. It kept progress flowing but also drowned the committed history in noise and the occasional conflict. I found this to be an issue with OpenAI&#39;s Codex in particular. Not so much with other agentic coding tools like Claude Code CLI (not covered in this blog post.)</li>
@@ -865,29 +868,28 @@
<li>Tests Matter: A solid base of unit tests for task manipulations kept things from breaking entirely when experimenting.</li>
<li>Live Documentation: Documentation, such as the README, is updated regularly to reflect all the hotkey and feature changes.</li>
</ul><br />
+<span>Maybe a better approach would have been to design the whole application from scratch before letting Codex do any of the coding. I will try that with my next toy project.</span><br />
<br />
<h3 style='display: inline' id='what-i-learned-using-agentic-coding'>What I learned using agentic coding</h3><br />
<br />
-<span>Stepping into agentic coding with Codex as my "pair programmer" was a genuine shift. I learned a lot—not just about automating code generation, but also about how you have to tightly steer, guide, and audit every line as things move at breakneck speed. I must admit, I sometimes lost track of what all the generated code was actually doing. But as the features seemed to work after a few iterations, I was satisfied—which is a bit concerning. Imagine if I approved a PR for a production-grade deployment without fully understanding what it was doing (and not a toy project like in this post).</span><br />
-<br />
-<span>Discussing requirements with Codex forced me to clarify features and spot logical pitfalls earlier. All those fast iterations meant I was constantly coaxing more helpful, less ambiguous code out of the model—making me rethink how to break features into clear, testable steps.</span><br />
+<span>Stepping into agentic coding with Codex as my "pair programmer" was a big shift. I learned a lot—not just about automating code generation, but also about how you have to tightly steer, guide, and audit every line as things move at high speed. I must admit, I sometimes lost track of what all the generated code was actually doing. But as the features seemed to work after a few iterations, I was satisfied—which is a bit concerning. Imagine if I approved a PR for a production-grade deployment without fully understanding what it was doing (and not a toy project like in this post).</span><br />
<br />
<h3 style='display: inline' id='how-much-time-did-i-save'>how much time did I save?</h3><br />
<br />
-<span>Did it buy me speed? Let&#39;s do some back-of-the-envelope math:</span><br />
+<span>Did it buy me speed?</span><br />
<br />
<ul>
<li>Say each commit takes Codex 5 minutes to generate, and you need to review/guide 179 commits = about _6 hours of active development_.</li>
<li>If you coded it all yourself, including all the bug fixes, features, design, and documentation, you might spend _10–20 hours_.</li>
-<li>That&#39;s a couple of days potential savings.</li>
+<li>That&#39;s a couple of days of potential savings—and I am by no means an expert in agentic coding, since this was my first completed project of this kind.</li>
</ul><br />
<h2 style='display: inline' id='conclusion'>Conclusion</h2><br />
<br />
-<span>Building Task Samurai with agentic coding was a wild ride—rapid feature growth, plenty of churns, countless fast fixes, and more merge commits I&#39;d expected. Keep the iterations short (or maybe in my next experiment, much larger, with better and more complete design before generating a single line of code), keep tests and documentation concise, and review and refine for final polish at the end. Even with the bumps along the way, shipping a polished terminal UI in days instead of weeks is a testament to the raw power (and some hazards) of agentic development.</span><br />
+<span>Building Task Samurai with agentic coding was a wild ride—rapid feature growth, countless fast fixes, and more merge commits than I&#39;d expected. Keep the iterations short (or maybe in my next experiment, much larger, with better and more complete design before generating a single line of code), keep tests and documentation concise, and review and refine for final polish at the end. Even with the bumps along the way, shipping a polished terminal UI in days instead of weeks is a testament to the power of agentic development.</span><br />
<br />
<span>Am I an agentic coding expert now? I don&#39;t think so. There are still many things to learn, and the landscape is constantly evolving.</span><br />
<br />
-<span>While working on Task Samurai, there were times I genuinely missed manual coding and the satisfaction that comes from writing every line yourself, debugging issues manually, and crafting solutions from scratch. However, this is the direction in which the industry seems to be shifting, unfortunately. If applied correctly, AI will boost performance, and if you don&#39;t use AI, your next performance review may be awkward.</span><br />
+<span>While working on Task Samurai, there were times I missed manual coding and the satisfaction that comes from writing every line yourself, debugging issues manually, and crafting solutions from scratch. However, this is the direction in which the industry seems to be shifting, unfortunately. If applied correctly, AI will boost performance, and if you don&#39;t use AI, your next performance review may be awkward.</span><br />
<br />
<span>Personally, I am not sure whether I like where the industry is going with agentic coding. I love "traditional" coding, and with agentic coding you operate at a higher level and don&#39;t interact directly with code as often, which I would miss. I think that in the future, designing, reviewing, and being able to read and understand code will be more important than writing code by hand.</span><br />
<br />
diff --git a/index.gmi b/index.gmi
index d57932ad..3f3e284c 100644
--- a/index.gmi
+++ b/index.gmi
@@ -1,6 +1,6 @@
# Hello!
-> This site was generated at 2025-07-09T15:05:37+03:00 by `Gemtexter`
+> This site was generated at 2025-07-12T22:45:27+03:00 by `Gemtexter`
Welcome to the foo.zone!
diff --git a/uptime-stats.gmi b/uptime-stats.gmi
index 7f34f690..fee2d77c 100644
--- a/uptime-stats.gmi
+++ b/uptime-stats.gmi
@@ -1,6 +1,6 @@
# My machine uptime stats
-> This site was last updated at 2025-07-09T15:05:36+03:00
+> This site was last updated at 2025-07-12T22:45:27+03:00
The following stats were collected via `uptimed` on all of my personal computers over many years and the output was generated by `guprecords`, the global uptime records stats analyser of mine.
@@ -30,8 +30,8 @@ Boots is the total number of host boots over the entire lifespan.
| 7. | *makemake | 76 | Linux 6.9.9-200.fc40.x86_64 |
| 8. | *uranus | 59 | NetBSD 10.1 |
| 9. | pluto | 51 | Linux 3.2.0-4-amd64 |
-| 10. | mega15289 | 50 | Darwin 23.4.0 |
-| 11. | *mega-m3-pro | 50 | Darwin 24.5.0 |
+| 10. | *mega-m3-pro | 50 | Darwin 24.5.0 |
+| 11. | mega15289 | 50 | Darwin 23.4.0 |
| 12. | *t450 | 43 | FreeBSD 14.2-RELEASE |
| 13. | *fishfinger | 43 | OpenBSD 7.6 |
| 14. | phobos | 40 | Linux 3.4.0-CM-g1dd7cdf |
@@ -39,8 +39,8 @@ Boots is the total number of host boots over the entire lifespan.
| 16. | *blowfish | 38 | OpenBSD 7.6 |
| 17. | sun | 33 | FreeBSD 10.3-RELEASE-p24 |
| 18. | f2 | 25 | FreeBSD 14.2-RELEASE-p1 |
-| 19. | moon | 20 | FreeBSD 14.0-RELEASE-p3 |
-| 20. | f1 | 20 | FreeBSD 14.2-RELEASE-p1 |
+| 19. | f1 | 20 | FreeBSD 14.2-RELEASE-p1 |
+| 20. | moon | 20 | FreeBSD 14.0-RELEASE-p3 |
+-----+----------------+-------+------------------------------+
```
@@ -55,7 +55,7 @@ Uptime is the total uptime of a host over the entire lifespan.
| 1. | vulcan | 4 years, 5 months, 6 days | Linux 3.10.0-1160.81.1.el7.x86_64 |
| 2. | sun | 3 years, 9 months, 26 days | FreeBSD 10.3-RELEASE-p24 |
| 3. | *uranus | 3 years, 9 months, 5 days | NetBSD 10.1 |
-| 4. | *earth | 3 years, 6 months, 27 days | Linux 6.15.4-200.fc42.x86_64 |
+| 4. | *earth | 3 years, 6 months, 30 days | Linux 6.15.4-200.fc42.x86_64 |
| 5. | *blowfish | 3 years, 5 months, 16 days | OpenBSD 7.6 |
| 6. | uugrn | 3 years, 5 months, 5 days | FreeBSD 11.2-RELEASE-p4 |
| 7. | deltavega | 3 years, 1 months, 21 days | Linux 3.10.0-1160.11.1.el7.x86_64 |
@@ -69,7 +69,7 @@ Uptime is the total uptime of a host over the entire lifespan.
| 15. | host0 | 1 years, 3 months, 9 days | FreeBSD 6.2-RELEASE-p5 |
| 16. | *makemake | 1 years, 3 months, 5 days | Linux 6.9.9-200.fc40.x86_64 |
| 17. | tauceti-e | 1 years, 2 months, 20 days | Linux 3.2.0-4-amd64 |
-| 18. | *mega-m3-pro | 1 years, 1 months, 26 days | Darwin 24.5.0 |
+| 18. | *mega-m3-pro | 1 years, 2 months, 5 days | Darwin 24.5.0 |
| 19. | callisto | 0 years, 10 months, 31 days | Linux 4.0.4-303.fc22.x86_64 |
| 20. | alphacentauri | 0 years, 10 months, 28 days | FreeBSD 11.4-RELEASE-p7 |
+-----+----------------+-----------------------------+-----------------------------------+
@@ -150,7 +150,7 @@ Lifespan is the total uptime + the total downtime of a host.
| 3. | alphacentauri | 6 years, 9 months, 13 days | FreeBSD 11.4-RELEASE-p7 |
| 4. | vulcan | 4 years, 5 months, 6 days | Linux 3.10.0-1160.81.1.el7.x86_64 |
| 5. | *makemake | 4 years, 4 months, 7 days | Linux 6.9.9-200.fc40.x86_64 |
-| 6. | *earth | 3 years, 12 months, 14 days | Linux 6.15.4-200.fc42.x86_64 |
+| 6. | *earth | 3 years, 12 months, 17 days | Linux 6.15.4-200.fc42.x86_64 |
| 7. | sun | 3 years, 10 months, 2 days | FreeBSD 10.3-RELEASE-p24 |
| 8. | *blowfish | 3 years, 5 months, 17 days | OpenBSD 7.6 |
| 9. | uugrn | 3 years, 5 months, 5 days | FreeBSD 11.2-RELEASE-p4 |
@@ -188,14 +188,14 @@ Boots is the total number of host boots over the entire lifespan.
| 10. | Darwin 13... | 40 |
| 11. | Darwin 23... | 33 |
| 12. | FreeBSD 5... | 25 |
-| 13. | *Darwin 24... | 22 |
-| 14. | Linux 2... | 22 |
+| 13. | Linux 2... | 22 |
+| 14. | *Darwin 24... | 22 |
| 15. | Darwin 21... | 17 |
| 16. | Darwin 15... | 15 |
| 17. | Darwin 22... | 12 |
| 18. | Darwin 18... | 11 |
-| 19. | OpenBSD 4... | 10 |
-| 20. | FreeBSD 6... | 10 |
+| 19. | FreeBSD 6... | 10 |
+| 20. | FreeBSD 7... | 10 |
+-----+----------------+-------+
```
@@ -211,7 +211,7 @@ Uptime is the total uptime of a host over the entire lifespan.
| 2. | *OpenBSD 7... | 6 years, 9 months, 24 days |
| 3. | FreeBSD 10... | 5 years, 9 months, 9 days |
| 4. | Linux 5... | 4 years, 10 months, 21 days |
-| 5. | *Linux 6... | 2 years, 9 months, 24 days |
+| 5. | *Linux 6... | 2 years, 9 months, 27 days |
| 6. | Linux 4... | 2 years, 7 months, 22 days |
| 7. | FreeBSD 11... | 2 years, 4 months, 28 days |
| 8. | Linux 2... | 1 years, 11 months, 21 days |
@@ -224,7 +224,7 @@ Uptime is the total uptime of a host over the entire lifespan.
| 15. | Darwin 18... | 0 years, 7 months, 5 days |
| 16. | Darwin 22... | 0 years, 6 months, 22 days |
| 17. | Darwin 15... | 0 years, 6 months, 15 days |
-| 18. | *Darwin 24... | 0 years, 5 months, 30 days |
+| 18. | *Darwin 24... | 0 years, 6 months, 8 days |
| 19. | FreeBSD 5... | 0 years, 5 months, 18 days |
| 20. | FreeBSD 13... | 0 years, 4 months, 2 days |
+-----+----------------+------------------------------+
@@ -242,7 +242,7 @@ Score is calculated by combining all other metrics.
| 2. | *OpenBSD 7... | 435 |
| 3. | FreeBSD 10... | 406 |
| 4. | Linux 5... | 317 |
-| 5. | *Linux 6... | 189 |
+| 5. | *Linux 6... | 190 |
| 6. | Linux 4... | 175 |
| 7. | FreeBSD 11... | 159 |
| 8. | Linux 2... | 121 |
@@ -253,11 +253,11 @@ Score is calculated by combining all other metrics.
| 13. | OpenBSD 4... | 39 |
| 14. | Darwin 21... | 38 |
| 15. | Darwin 18... | 32 |
-| 16. | *Darwin 24... | 30 |
+| 16. | *Darwin 24... | 31 |
| 17. | Darwin 22... | 30 |
| 18. | Darwin 15... | 29 |
-| 19. | FreeBSD 5... | 25 |
-| 20. | FreeBSD 13... | 25 |
+| 19. | FreeBSD 13... | 25 |
+| 20. | FreeBSD 5... | 25 |
+-----+----------------+-------+
```
@@ -285,10 +285,10 @@ Uptime is the total uptime of a host over the entire lifespan.
+-----+------------+------------------------------+
| Pos | KernelName | Uptime |
+-----+------------+------------------------------+
-| 1. | *Linux | 27 years, 10 months, 17 days |
+| 1. | *Linux | 27 years, 10 months, 19 days |
| 2. | *FreeBSD | 11 years, 5 months, 3 days |
| 3. | *OpenBSD | 7 years, 5 months, 5 days |
-| 4. | *Darwin | 4 years, 9 months, 17 days |
+| 4. | *Darwin | 4 years, 9 months, 26 days |
| 5. | *NetBSD | 0 years, 1 months, 1 days |
+-----+------------+------------------------------+
```
@@ -301,10 +301,10 @@ Score is calculated by combining all other metrics.
+-----+------------+-------+
| Pos | KernelName | Score |
+-----+------------+-------+
-| 1. | *Linux | 1848 |
+| 1. | *Linux | 1849 |
| 2. | *FreeBSD | 799 |
| 3. | *OpenBSD | 474 |
-| 4. | *Darwin | 313 |
+| 4. | *Darwin | 314 |
| 5. | *NetBSD | 2 |
+-----+------------+-------+
```