 about/resources.gmi                                     | 204
 about/showcase.gmi                                      | 349
 about/showcase.gmi.tpl                                  | 305
 about/showcase/debroid/image-1.png                      |  40
 gemfeed/DRAFT-taskwarrior-autonomous-agent-loop.gmi     | 477
 gemfeed/DRAFT-taskwarrior-autonomous-agent-loop.gmi.tpl | 451
6 files changed, 450 insertions, 1376 deletions
diff --git a/about/resources.gmi b/about/resources.gmi
index f6f40292..8dca4ce3 100644
--- a/about/resources.gmi
+++ b/about/resources.gmi
@@ -35,110 +35,110 @@ You won't find any links on this site because, over time, the links will break.
In random order:
-* Data Science at the Command Line; Jeroen Janssens; O'Reilly
-* Systemprogrammierung in Go; Frank Müller; dpunkt
-* The Go Programming Language; Alan A. A. Donovan; Addison-Wesley Professional
-* Terraform Cookbook; Mikael Krief; Packt Publishing
-* Java ist auch eine Insel; Christian Ullenboom;
-* Hands-on Infrastructure Monitoring with Prometheus; Joel Bastos, Pedro Araujo; Packt
-* Learn You a Haskell for Great Good!; Miran Lipovaca; No Starch Press
-* Polished Ruby Programming; Jeremy Evans; Packt Publishing
-* Modern Perl; Chromatic ; Onyx Neon Press
-* The Docker Book; James Turnbull; Kindle
-* Programming Perl aka "The Camel Book"; Tom Christiansen, brian d foy, Larry Wall & Jon Orwant; O'Reilly
* Developing Games in Java; David Brackeen and others...; New Riders
-* Learn You Some Erlang for Great Good; Fred Herbert; No Starch Press
* Object-Oriented Programming with ANSI-C; Axel-Tobias Schreiner
-* Site Reliability Engineering; How Google runs production systems; O'Reilly
-* 100 Go Mistakes and How to Avoid Them; Teiva Harsanyi; Manning Publications
-* Raku Fundamentals; Moritz Lenz; Apress
-* The KCNA (Kubernetes and Cloud Native Associate) Book; Nigel Poulton
-* Amazon Web Services in Action; Michael Wittig and Andreas Wittig; Manning Publications
-* Effective awk programming; Arnold Robbins; O'Reilly
-* Ultimate Go Notebook; Bill Kennedy
-* Funktionale Programmierung; Peter Pepper; Springer
-* DevOps And Site Reliability Engineering Handbook; Stephen Fleming; Audible
-* Kubernetes Cookbook; Sameer Naik, Sébastien Goasguen, Jonathan Michaux; O'Reilly
-* Chaos Engineering - System Resiliency in Practice; Casey Rosenthal and Nora Jones; eBook
-* Systems Performance Tuning; Gian-Paolo D. Musumeci and others...; O'Reilly
-* Pro Puppet; James Turnbull, Jeffrey McCune; Apress
-* 21st Century C: C Tips from the New School; Ben Klemens; O'Reilly
-* The Practise of System and Network Administration; Thomas A. Limoncelli, Christina J. Hogan, Strata R. Chalup; Addison-Wesley Professional Pro Git; Scott Chacon, Ben Straub; Apress
-* Clusterbau mit Linux-HA; Michael Schwartzkopff; O'Reilly
-* Programming Ruby 3.3 (5th Edition); Noel Rappin, with Dave Thomas; The Pragmatic Bookshelf
-* Higher Order Perl; Mark Dominus; Morgan Kaufmann
-* The Kubernetes Book; Nigel Poulton; Unabridged Audiobook
* Concurrency in Go; Katherine Cox-Buday; O'Reilly
-* The Pragmatic Programmer; David Thomas; Addison-Wesley
-* Go Brain Teasers - Exercise Your Mind; Miki Tebeka; The Pragmatic Programmers
-* 97 things every SRE should know; Emil Stolarsky, Jaime Woo; O'Reilly
* Seeking SRE: Conversations About Running Production Systems at Scale; David N. Blank-Edelman; eBook
* DNS and BIND; Cricket Liu; O'Reilly
-* Tmux 2: Productive Mouse-free Development; Brain P. Hogan; The Pragmatic Programmers
-* Effective Java; Joshua Bloch; Addison-Wesley Professional
+* Go Brain Teasers - Exercise Your Mind; Miki Tebeka; The Pragmatic Programmers
+* The KCNA (Kubernetes and Cloud Native Associate) Book; Nigel Poulton
* Perl New Features; Joshua McAdams, brian d foy; Perl School
+* The Go Programming Language; Alan A. A. Donovan; Addison-Wesley Professional
+* Polished Ruby Programming; Jeremy Evans; Packt Publishing
+* Effective Java; Joshua Bloch; Addison-Wesley Professional
+* Chaos Engineering - System Resiliency in Practice; Casey Rosenthal and Nora Jones; eBook
* The DevOps Handbook; Gene Kim, Jez Humble, Patrick Debois, John Willis; Audible
+* Terraform Cookbook; Mikael Krief; Packt Publishing
+* Learn You a Haskell for Great Good!; Miran Lipovaca; No Starch Press
+* Site Reliability Engineering: How Google Runs Production Systems; O'Reilly
+* The Kubernetes Book; Nigel Poulton; Unabridged Audiobook
+* Programming Perl aka "The Camel Book"; Tom Christiansen, brian d foy, Larry Wall & Jon Orwant; O'Reilly
+* Java ist auch eine Insel; Christian Ullenboom;
+* Funktionale Programmierung; Peter Pepper; Springer
* Distributed Systems: Principles and Paradigms; Andrew S. Tanenbaum; Pearson
+* Programming Ruby 3.3 (5th Edition); Noel Rappin, with Dave Thomas; The Pragmatic Bookshelf
+* Clusterbau mit Linux-HA; Michael Schwartzkopff; O'Reilly
+* 97 things every SRE should know; Emil Stolarsky, Jaime Woo; O'Reilly
* Raku Recipes; J.J. Merelo; Apress
-* Think Raku (aka Think Perl 6); Laurent Rosenfeld, Allen B. Downey; O'Reilly
* The C++ Programming Language; Bjarne Stroustrup;
+* Kubernetes Cookbook; Sameer Naik, Sébastien Goasguen, Jonathan Michaux; O'Reilly
* Learning eBPF; Liz Rice; O'Reilly
+* Raku Fundamentals; Moritz Lenz; Apress
+* The Practice of System and Network Administration; Thomas A. Limoncelli, Christina J. Hogan, Strata R. Chalup; Addison-Wesley Professional
+* Pro Git; Scott Chacon, Ben Straub; Apress
+* The Docker Book; James Turnbull; Kindle
+* Data Science at the Command Line; Jeroen Janssens; O'Reilly
+* DevOps And Site Reliability Engineering Handbook; Stephen Fleming; Audible
+* Ultimate Go Notebook; Bill Kennedy
+* Systems Performance Tuning; Gian-Paolo D. Musumeci and others...; O'Reilly
+* Effective awk programming; Arnold Robbins; O'Reilly
+* 100 Go Mistakes and How to Avoid Them; Teiva Harsanyi; Manning Publications
+* Hands-on Infrastructure Monitoring with Prometheus; Joel Bastos, Pedro Araujo; Packt
+* Higher Order Perl; Mark Dominus; Morgan Kaufmann
+* Systemprogrammierung in Go; Frank Müller; dpunkt
+* Amazon Web Services in Action; Michael Wittig and Andreas Wittig; Manning Publications
+* The Pragmatic Programmer; David Thomas; Addison-Wesley
+* Learn You Some Erlang for Great Good!; Fred Hébert; No Starch Press
+* Think Raku (aka Think Perl 6); Laurent Rosenfeld, Allen B. Downey; O'Reilly
+* Tmux 2: Productive Mouse-free Development; Brian P. Hogan; The Pragmatic Programmers
+* 21st Century C: C Tips from the New School; Ben Klemens; O'Reilly
+* Pro Puppet; James Turnbull, Jeffrey McCune; Apress
+* Modern Perl; chromatic; Onyx Neon Press
## Technical references
I didn't read them from beginning to end, but I use them to look things up. The books are in random order:
-* Relayd and Httpd Mastery; Michael W Lucas
+* Groovy Kurz & Gut; Joerg Staudemeier; O'Reilly
* Understanding the Linux Kernel; Daniel P. Bovet, Marco Cesati; O'Reilly
+* Go: Design Patterns for Real-World Projects; Mat Ryer; Packt
+* The Linux Programming Interface; Michael Kerrisk; No Starch Press
+* Relayd and Httpd Mastery; Michael W Lucas
* BPF Performance Tools - Linux System and Application Observability; Brendan Gregg; Addison Wesley
* Algorithms; Robert Sedgewick, Kevin Wayne; Addison Wesley
-* The Linux Programming Interface; Michael Kerrisk; No Starch Press
-* Groovy Kurz & Gut; Joerg Staudemeier; O'Reilly
* Implementing Service Level Objectives; Alex Hidalgo; O'Reilly
-* Go: Design Patterns for Real-World Projects; Mat Ryer; Packt
## Self-development and soft-skills books
In random order:
+* Meditation for Mortals; Oliver Burkeman; Audiobook
+* Buddha and Einstein Walk into a Bar; Guy Joseph Ale, Claire Bloom; Blackstone Publishing
+* Deep Work; Cal Newport; Piatkus
+* The Joy of Missing Out; Christina Crook; New Society Publishers
+* Psycho-Cybernetics; Maxwell Maltz; Perigee Books
* Slow Productivity; Cal Newport; Penguin Random House
-* The Phoenix Project - A Novel About IT, DevOps, and Helping your Business Win; Gene Kim and Kevin Behr; Trade Select
-* So Good They Can't Ignore You; Cal Newport; Business Plus
+* The Daily Stoic; Ryan Holiday, Stephen Hanselman; Profile Books
+* Ultralearning; Anna Laurent; Self-published via Amazon
+* Getting Things Done; David Allen
+* Influence without Authority; A. Cohen, D. Bradford; Wiley
+* The 7 Habits Of Highly Effective People; Stephen R. Covey; Simon & Schuster UK
+* Soft Skills; John Sonmez; Manning Publications
+* Staff Engineer: Leadership beyond the management track; Will Larson; Audiobook
* The Complete Software Developer's Career Guide; John Sonmez; Unabridged Audiobook
+* Consciousness: A Very Short Introduction; Susan Blackmore; Oxford University Press
+* Stop starting, start finishing; Arne Roock; Lean-Kanban University
+* The Off Switch; Mark Cropley; Virgin Books (RE-READ 1ST TIME)
+* 101 Essays That Will Change the Way You Think; Brianna Wiest; Audiobook
+* Digital Minimalism; Cal Newport; Portfolio Penguin
+* Atomic Habits; James Clear; Random House Business
* The Obstacle Is The Way; Ryan Holiday; Profile Books Ltd
-* Soft Skills; John Sommez; Manning Publications
-* The Daily Stoic; Ryan Holiday, Stephen Hanselman; Profile Books
-* Ultralearning; Scott Young; Thorsons
+* Solve for Happy; Mo Gawdat (RE-READ 1ST TIME)
+* 97 Things Every Engineering Manager Should Know; Camille Fournier; Audiobook
+* Who Moved My Cheese?; Dr. Spencer Johnson; Vermilion
* Time Management for System Administrators; Thomas A. Limoncelli; O'Reilly
-* Deep Work; Cal Newport; Piatkus
+* The Phoenix Project - A Novel About IT, DevOps, and Helping your Business Win; Gene Kim and Kevin Behr; Trade Select
* The Courage to Be Disliked; Ichiro Kishimi and Fumitake Koga; Audiobook
+* Eat That Frog; Brian Tracy
+* The Good Enough Job; Simone Stolzoff; Ebury Edge
+* Search Inside Yourself - The Unexpected Path to Achieving Success, Happiness (and World Peace); Chade-Meng Tan, Daniel Goleman, Jon Kabat-Zinn; HarperOne
+* So Good They Can't Ignore You; Cal Newport; Business Plus
* Coders at Work - Reflections on the Craft of Programming; Peter Seibel and Mitchell Dorian et al.; Audiobook
-* The 7 Habits Of Highly Effective People; Stephen R. Covey; Simon & Schuster UK
-* 101 Essays that change the way you think; Brianna Wiest; Audiobook
-* Who Moved My Cheese?; Dr. Spencer Johnson; Vermilion
-* Influence without Authority; A. Cohen, D. Bradford; Wiley
-* The Joy of Missing Out; Christina Crook; New Society Publishers
-* Ultralearning; Anna Laurent; Self-published via Amazon
-* Buddah and Einstein walk into a Bar; Guy Joseph Ale, Claire Bloom; Blackstone Publishing
-* 97 Things Every Engineering Manager Should Know; Camille Fournier; Audiobook
* Eat That Frog!; Brian Tracy; Hodder Paperbacks
-* Psycho-Cybernetics; Maxwell Maltz; Perigee Books
-* Consciousness: A Very Short Introduction; Susan Blackmore; Oxford Uiversity Press
-* Solve for Happy; Mo Gawdat (RE-READ 1ST TIME)
-* The Power of Now; Eckhard Tolle; Yellow Kite
-* Digital Minimalism; Cal Newport; Portofolio Penguin
* The Bullet Journal Method; Ryder Carroll; Fourth Estate
-* Eat That Frog; Brian Tracy
-* Staff Engineer: Leadership beyond the management track; Will Larson; Audiobook
-* Never Split the Difference; Chris Voss, Tahl Raz; Random House Business
-* The Off Switch; Mark Cropley; Virgin Books (RE-READ 1ST TIME)
-* Getting Things Done; David Allen
-* Search Inside Yourself - The Unexpected path to Achieving Success, Happiness (and World Peace); Chade-Meng Tan, Daniel Goleman, Jon Kabat-Zinn; HarperOne
+* Ultralearning; Scott Young; Thorsons
* The Software Engineer's Guidebook: Navigating senior, tech lead, and staff engineer positions at tech companies and startups; Gergely Orosz; Audiobook
-* Stop starting, start finishing; Arne Roock; Lean-Kanban University
-* The Good Enough Job; Simone Stolzoff; Ebury Edge
-* Meditation for Mortals, Oliver Burkeman, Audiobook
-* Atomic Habits; James Clear; Random House Business
+* The Power of Now; Eckhart Tolle; Yellow Kite
+* Never Split the Difference; Chris Voss, Tahl Raz; Random House Business
=> ../notes/index.gmi Here are notes of mine for some of the books
@@ -146,30 +146,30 @@ In random order:
Some of these were in-person with exams; others were online learning lectures only. In random order:
-* Algorithms Video Lectures; Robert Sedgewick; O'Reilly Online
-* MySQL Deep Dive Workshop; 2-day on-site training
-* Protocol buffers; O'Reilly Online
-* Red Hat Certified System Administrator; Course + certification (Although I had the option, I decided not to take the next course as it is more effective to self learn what I need)
* The Ultimate Kubernetes Bootcamp; School of Devops; O'Reilly Online
* F5 Loadbalancers Training; 2-day on-site training; F5, Inc.
-* Cloud Operations on AWS - Learn how to configure, deploy, maintain, and troubleshoot your AWS environments; 3-day online live training with labs; Amazon
-* Developing IaC with Terraform (with Live Lessons); O'Reilly Online
* The Well-Grounded Rubyist Video Edition; David. A. Black; O'Reilly Online
+* Structure and Interpretation of Computer Programs; Harold Abelson and more...;
* Linux Security and Isolation APIs Training; Michael Kerrisk; 3-day on-site training
-* Functional programming lecture; Remote University of Hagen
+* Protocol buffers; O'Reilly Online
+* MySQL Deep Dive Workshop; 2-day on-site training
* AWS Immersion Day; Amazon; 1-day interactive online training
-* Structure and Interpretation of Computer Programs; Harold Abelson and more...;
+* Cloud Operations on AWS - Learn how to configure, deploy, maintain, and troubleshoot your AWS environments; 3-day online live training with labs; Amazon
* Scripting Vim; Damian Conway; O'Reilly Online
* Apache Tomcat Best Practises; 3-day on-site training
+* Red Hat Certified System Administrator; Course + certification (Although I had the option, I decided not to take the next course, as it is more effective to self-learn what I need)
+* Algorithms Video Lectures; Robert Sedgewick; O'Reilly Online
+* Functional programming lecture; Remote University of Hagen
+* Developing IaC with Terraform (with Live Lessons); O'Reilly Online
* Ultimate Go Programming; Bill Kennedy; O'Reilly Online
## Technical guides
These are not whole books but guides (smaller or larger) that I found very useful. In random order:
-* Advanced Bash-Scripting Guide
-* Raku Guide at https://raku.guide
* How CPUs work at https://cpu.land
+* Raku Guide at https://raku.guide
+* Advanced Bash-Scripting Guide
## Podcasts
@@ -177,58 +177,58 @@ These are not whole books, but guides (smaller or larger) which I found very use
In random order:
-* Pratical AI
-* Dev Interrupted
-* The ProdCast (Google SRE Podcast)
-* Wednesday Wisdom
-* Backend Banter
-* Hidden Brain
-* The Pragmatic Engineer Podcast
-* Fallthrough [Golang]
+* Fork Around And Find Out
* Cup o' Go [Golang]
-* BSD Now [BSD]
+* Fallthrough [Golang]
* Modern Mentor
-* Fork Around And Find Out
-* Maintainable
+* The Pragmatic Engineer Podcast
+* Backend Banter
* Deep Questions with Cal Newport
+* The ProdCast (Google SRE Podcast)
+* Hidden Brain
* The Changelog Podcast(s)
+* Dev Interrupted
+* BSD Now [BSD]
+* Maintainable
+* Practical AI
+* Wednesday Wisdom
### Podcasts I liked
I liked these but no longer listen to them: either the podcast has "finished" (no more episodes) or I stopped due to time constraints or a shift in my interests.
+* Modern Mentor
+* Go Time (predecessor of Fallthrough)
* CRE: Chaosradio Express [german]
+* Java Pub House
* FLOSS weekly
* Ship It (predecessor of Fork Around And Find Out)
-* Go Time (predecessor of fallthrough)
-* Java Pub House
-* Modern Mentor
## Newsletters I like
This is a mix of tech and non-tech newsletters I am subscribed to. In random order:
-* Changelog News
-* Ruby Weekly
* VK Newsletter
* Register Spill
-* Applied Go Weekly Newsletter
-* The Imperfectionist
* byteSizeGo
-* The Valuable Dev
+* Changelog News
* The Pragmatic Engineer
-* Andreas Brandhorst Newsletter (Sci-Fi author)
+* The Valuable Dev
+* Applied Go Weekly Newsletter
* Golang Weekly
+* The Imperfectionist
+* Andreas Brandhorst Newsletter (Sci-Fi author)
+* Ruby Weekly
* Monospace Mentor
## Magazines I like(d)
This is a mix of tech magazines I like(d). I may not be a current subscriber, but now and then I buy an issue. In random order:
-* freeX (not published anymore)
* LWN (online only)
-* Linux Magazine
+* freeX (not published anymore)
* Linux User
+* Linux Magazine
# Formal education
diff --git a/about/showcase.gmi b/about/showcase.gmi
index 25e51895..9cf9ca46 100644
--- a/about/showcase.gmi
+++ b/about/showcase.gmi
@@ -1,6 +1,6 @@
# Project Showcase
-Generated on: 2026-02-21
+Generated on: 2026-02-22
This page showcases my side projects, providing an overview of what each project does, its technical implementation, and key metrics. Each project summary includes information about the programming languages used, development activity, and licensing. The projects are ranked by score, which combines project size and recent activity.
@@ -9,24 +9,24 @@ This page showcases my side projects, providing an overview of what each project
* ⇢ Project Showcase
* ⇢ ⇢ Overall Statistics
* ⇢ ⇢ Projects
-* ⇢ ⇢ ⇢ 1. hexai
+* ⇢ ⇢ ⇢ 1. ior
* ⇢ ⇢ ⇢ 2. dotfiles
-* ⇢ ⇢ ⇢ 3. epimetheus
-* ⇢ ⇢ ⇢ 4. conf
-* ⇢ ⇢ ⇢ 5. foo.zone
-* ⇢ ⇢ ⇢ 6. scifi
-* ⇢ ⇢ ⇢ 7. log4jbench
-* ⇢ ⇢ ⇢ 8. gogios
-* ⇢ ⇢ ⇢ 9. yoga
-* ⇢ ⇢ ⇢ 10. perc
-* ⇢ ⇢ ⇢ 11. totalrecall
-* ⇢ ⇢ ⇢ 12. ior
+* ⇢ ⇢ ⇢ 3. hexai
+* ⇢ ⇢ ⇢ 4. epimetheus
+* ⇢ ⇢ ⇢ 5. conf
+* ⇢ ⇢ ⇢ 6. foo.zone
+* ⇢ ⇢ ⇢ 7. scifi
+* ⇢ ⇢ ⇢ 8. log4jbench
+* ⇢ ⇢ ⇢ 9. gogios
+* ⇢ ⇢ ⇢ 10. yoga
+* ⇢ ⇢ ⇢ 11. perc
+* ⇢ ⇢ ⇢ 12. totalrecall
* ⇢ ⇢ ⇢ 13. gitsyncer
* ⇢ ⇢ ⇢ 14. tasksamurai
* ⇢ ⇢ ⇢ 15. foostats
* ⇢ ⇢ ⇢ 16. timr
-* ⇢ ⇢ ⇢ 17. dtail
-* ⇢ ⇢ ⇢ 18. gos
+* ⇢ ⇢ ⇢ 17. gos
+* ⇢ ⇢ ⇢ 18. dtail
* ⇢ ⇢ ⇢ 19. ds-sim
* ⇢ ⇢ ⇢ 20. wireguardmeshgenerator
* ⇢ ⇢ ⇢ 21. gemtexter
@@ -48,15 +48,15 @@ This page showcases my side projects, providing an overview of what each project
* ⇢ ⇢ ⇢ 37. mon
* ⇢ ⇢ ⇢ 38. staticfarm-apache-handlers
* ⇢ ⇢ ⇢ 39. pingdomfetch
-* ⇢ ⇢ ⇢ 40. fype
-* ⇢ ⇢ ⇢ 41. xerl
-* ⇢ ⇢ ⇢ 42. ychat
-* ⇢ ⇢ ⇢ 43. fapi
-* ⇢ ⇢ ⇢ 44. perl-c-fibonacci
-* ⇢ ⇢ ⇢ 45. netcalendar
-* ⇢ ⇢ ⇢ 46. loadbars
-* ⇢ ⇢ ⇢ 47. gotop
-* ⇢ ⇢ ⇢ 48. rubyfy
+* ⇢ ⇢ ⇢ 40. xerl
+* ⇢ ⇢ ⇢ 41. ychat
+* ⇢ ⇢ ⇢ 42. fapi
+* ⇢ ⇢ ⇢ 43. perl-c-fibonacci
+* ⇢ ⇢ ⇢ 44. netcalendar
+* ⇢ ⇢ ⇢ 45. loadbars
+* ⇢ ⇢ ⇢ 46. gotop
+* ⇢ ⇢ ⇢ 47. rubyfy
+* ⇢ ⇢ ⇢ 48. fype
* ⇢ ⇢ ⇢ 49. pwgrep
* ⇢ ⇢ ⇢ 50. perldaemon
* ⇢ ⇢ ⇢ 51. jsmstrade
@@ -75,36 +75,38 @@ This page showcases my side projects, providing an overview of what each project
## Overall Statistics
* 📦 Total Projects: 62
-* 📊 Total Commits: 12,551
-* 📈 Total Lines of Code: 311,290
-* 📄 Total Lines of Documentation: 41,076
-* 💻 Languages: Go (36.4%), Java (13.2%), C++ (8.1%), C (6.3%), XML (6.2%), Shell (5.9%), Perl (5.6%), C/C++ (5.2%), YAML (5.1%), HTML (1.9%), Config (1.2%), Ruby (1.0%), HCL (0.9%), Python (0.7%), CSS (0.6%), Make (0.5%), JSON (0.4%), Haskell (0.2%), JavaScript (0.2%), Raku (0.1%), TOML (0.1%)
-* 📚 Documentation: Markdown (69.8%), Text (28.9%), LaTeX (1.4%)
+* 📊 Total Commits: 12,631
+* 📈 Total Lines of Code: 317,099
+* 📄 Total Lines of Documentation: 40,325
+* 💻 Languages: Go (37.7%), Java (12.9%), C++ (8.0%), C (6.1%), XML (6.1%), Shell (5.8%), Perl (5.5%), C/C++ (5.1%), YAML (5.0%), HTML (1.9%), Config (1.2%), Ruby (0.9%), HCL (0.9%), Python (0.6%), CSS (0.6%), Make (0.5%), JSON (0.4%), Haskell (0.2%), JavaScript (0.2%), Raku (0.1%), TOML (0.1%)
+* 📚 Documentation: Markdown (71.4%), Text (27.2%), LaTeX (1.4%)
* 🚀 Release Status: 39 released, 23 experimental (62.9% with releases, 37.1% experimental)
## Projects
-### 1. hexai
+### 1. ior
-* 💻 Languages: Go (100.0%)
-* 📚 Documentation: Markdown (100.0%)
-* 📊 Commits: 342
-* 📈 Lines of Code: 29895
-* 📄 Lines of Documentation: 5502
-* 📅 Development Period: 2025-08-01 to 2026-02-13
-* 🏆 Score: 365.1 (combines code size and activity)
+* 💻 Languages: Go (73.1%), C (26.3%), C/C++ (0.6%)
+* 📚 Documentation: Markdown (80.3%), Text (19.7%)
+* 📊 Commits: 384
+* 📈 Lines of Code: 21785
+* 📄 Lines of Documentation: 2428
+* 📅 Development Period: 2024-01-18 to 2026-02-21
+* 🏆 Score: 2219.6 (combines code size and activity)
* ⚖️ License: No license found
-* 🏷️ Latest Release: v0.21.0 (2026-02-12)
+* 🧪 Status: Experimental (no releases yet)
-=> showcase/hexai/image-1.png hexai screenshot
+=> showcase/ior/image-1.png ior screenshot
-Hexai is a Go-based AI integration tool designed primarily for the Helix editor that provides LSP (Language Server Protocol) powered AI features. It offers code auto-completion, AI-driven code actions, in-editor chat with LLMs, and a standalone CLI tool for direct LLM interaction. A standout feature is its ability to query multiple AI providers (OpenAI, OpenRouter, GitHub Copilot, Ollama) in parallel, allowing developers to compare responses side-by-side. It has enhanced capabilities for Go code understanding, such as generating unit tests from functions, while supporting other programming languages as well.
+I/O Riot NG is a Linux-only performance analysis tool that uses BPF (Berkeley Packet Filter) to trace synchronous I/O syscalls and measure their execution time. It captures stack traces during I/O operations and generates compressed output in a format compatible with Inferno FlameGraphs, allowing developers to visually identify performance bottlenecks caused by blocking I/O calls. This makes it particularly useful for diagnosing latency issues in applications where I/O operations are suspected of causing performance degradation.
-The project is implemented as an LSP server written in Go, with a TUI component built using Bubble Tea for the tmux-based code action runner (`hexai-tmux-action`). This architecture allows it to integrate seamlessly into LSP-compatible editors, with special focus on Helix + tmux workflows. The custom prompt feature lets developers use their preferred editor to craft prompts, making it flexible for various development workflows.
+=> showcase/ior/image-2.svg ior screenshot
-=> https://codeberg.org/snonux/hexai View on Codeberg
-=> https://github.com/snonux/hexai View on GitHub
+The tool is implemented in Go and C, leveraging libbpfgo for BPF interaction. It automatically generates BPF tracepoint handlers and Go type definitions from Linux kernel tracepoint data, attaches to syscall entry/exit points, and collects timing data with minimal overhead. The project is a modern successor to the original I/O Riot (which used SystemTap), offering better performance and easier deployment through BPF's built-in kernel support.
+
+=> https://codeberg.org/snonux/ior View on Codeberg
+=> https://github.com/snonux/ior View on GitHub
---
@@ -112,11 +114,11 @@ The project is implemented as an LSP server written in Go, with a TUI component
* 💻 Languages: Shell (58.9%), CSS (11.0%), Config (10.2%), TOML (10.1%), Ruby (8.4%), JSON (1.1%), INI (0.2%)
* 📚 Documentation: Markdown (100.0%)
-* 📊 Commits: 776
-* 📈 Lines of Code: 2960
-* 📄 Lines of Documentation: 653
-* 📅 Development Period: 2023-07-30 to 2026-02-21
-* 🏆 Score: 364.5 (combines code size and activity)
+* 📊 Commits: 783
+* 📈 Lines of Code: 2961
+* 📄 Lines of Documentation: 949
+* 📅 Development Period: 2023-07-30 to 2026-02-22
+* 🏆 Score: 427.7 (combines code size and activity)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -130,7 +132,31 @@ The architecture is straightforward: config files live in subdirectories mirrori
---
-### 3. epimetheus
+### 3. hexai
+
+* 💻 Languages: Go (100.0%)
+* 📚 Documentation: Markdown (100.0%)
+* 📊 Commits: 343
+* 📈 Lines of Code: 29895
+* 📄 Lines of Documentation: 5508
+* 📅 Development Period: 2025-08-01 to 2026-02-22
+* 🏆 Score: 341.8 (combines code size and activity)
+* ⚖️ License: No license found
+* 🏷️ Latest Release: v0.21.0 (2026-02-12)
+
+
+=> showcase/hexai/image-1.png hexai screenshot
+
+Hexai is a Go-based AI integration tool designed primarily for the Helix editor, providing LSP (Language Server Protocol)-powered AI features. It offers code auto-completion, AI-driven code actions, in-editor chat with LLMs, and a standalone CLI tool for direct LLM interaction. A standout feature is its ability to query multiple AI providers (OpenAI, OpenRouter, GitHub Copilot, Ollama) in parallel, allowing developers to compare responses side-by-side. It has enhanced capabilities for Go code understanding, such as generating unit tests from functions, while supporting other programming languages as well.
+
+The project is implemented as an LSP server written in Go, with a TUI component built using Bubble Tea for the tmux-based code action runner (`hexai-tmux-action`). This architecture allows it to integrate seamlessly into LSP-compatible editors, with special focus on Helix + tmux workflows. The custom prompt feature lets developers use their preferred editor to craft prompts, making it flexible for various development workflows.
+
+=> https://codeberg.org/snonux/hexai View on Codeberg
+=> https://github.com/snonux/hexai View on GitHub
+
+---
+
+### 4. epimetheus
* 💻 Languages: Go (85.2%), Shell (14.8%)
* 📚 Documentation: Markdown (100.0%)
@@ -138,7 +164,7 @@ The architecture is straightforward: config files live in subdirectories mirrori
* 📈 Lines of Code: 5199
* 📄 Lines of Documentation: 1734
* 📅 Development Period: 2026-02-07 to 2026-02-14
-* 🏆 Score: 314.0 (combines code size and activity)
+* 🏆 Score: 284.8 (combines code size and activity)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -154,15 +180,15 @@ The architecture routes current data (<5 min old) through Pushgateway where Prom
---
-### 4. conf
+### 5. conf
-* 💻 Languages: YAML (80.7%), Perl (10.0%), Shell (6.1%), Python (2.3%), Docker (0.7%), Config (0.2%), HTML (0.1%)
+* 💻 Languages: YAML (80.7%), Perl (9.9%), Shell (6.0%), Python (2.3%), Docker (0.7%), Config (0.2%), HTML (0.1%)
* 📚 Documentation: Markdown (97.1%), Text (2.9%)
-* 📊 Commits: 785
-* 📈 Lines of Code: 19079
-* 📄 Lines of Documentation: 6585
-* 📅 Development Period: 2021-12-28 to 2026-02-08
-* 🏆 Score: 250.8 (combines code size and activity)
+* 📊 Commits: 791
+* 📈 Lines of Code: 19132
+* 📄 Lines of Documentation: 6572
+* 📅 Development Period: 2021-12-28 to 2026-02-15
+* 🏆 Score: 261.6 (combines code size and activity)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -176,7 +202,7 @@ The project is organized into distinct subdirectories: `dotfiles/` contains shel
---
-### 5. foo.zone
+### 6. foo.zone
* 💻 Languages: XML (98.7%), Shell (1.0%), Go (0.3%)
* 📚 Documentation: Text (86.2%), Markdown (13.8%)
@@ -184,7 +210,7 @@ The project is organized into distinct subdirectories: `dotfiles/` contains shel
* 📈 Lines of Code: 18702
* 📄 Lines of Documentation: 174
* 📅 Development Period: 2021-04-29 to 2026-02-07
-* 🏆 Score: 215.8 (combines code size and activity)
+* 🏆 Score: 203.4 (combines code size and activity)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -196,7 +222,7 @@ foo.zone: source code repository.
---
-### 6. scifi
+### 7. scifi
* 💻 Languages: JSON (35.9%), CSS (30.6%), JavaScript (29.6%), HTML (3.8%)
* 📚 Documentation: Markdown (100.0%)
@@ -204,7 +230,7 @@ foo.zone: source code repository.
* 📈 Lines of Code: 1664
* 📄 Lines of Documentation: 853
* 📅 Development Period: 2026-01-25 to 2026-01-27
-* 🏆 Score: 117.3 (combines code size and activity)
+* 🏆 Score: 112.3 (combines code size and activity)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -218,7 +244,7 @@ The architecture keeps content separate from presentation: book metadata lives i
---
-### 7. log4jbench
+### 8. log4jbench
* 💻 Languages: Java (78.9%), XML (21.1%)
* 📚 Documentation: Markdown (100.0%)
@@ -226,7 +252,7 @@ The architecture keeps content separate from presentation: book metadata lives i
* 📈 Lines of Code: 774
* 📄 Lines of Documentation: 119
* 📅 Development Period: 2026-01-09 to 2026-01-09
-* 🏆 Score: 66.4 (combines code size and activity)
+* 🏆 Score: 64.6 (combines code size and activity)
* ⚖️ License: MIT
* 🧪 Status: Experimental (no releases yet)
@@ -240,17 +266,17 @@ The implementation uses a fat JAR built with Maven, requiring Java 17+. It's des
---
-### 8. gogios
+### 9. gogios
* 💻 Languages: Go (98.9%), JSON (0.6%), YAML (0.5%)
* 📚 Documentation: Markdown (94.9%), Text (5.1%)
-* 📊 Commits: 108
+* 📊 Commits: 109
* 📈 Lines of Code: 3875
* 📄 Lines of Documentation: 394
-* 📅 Development Period: 2023-04-17 to 2026-02-08
-* 🏆 Score: 33.3 (combines code size and activity)
+* 📅 Development Period: 2023-04-17 to 2026-02-16
+* 🏆 Score: 35.0 (combines code size and activity)
* ⚖️ License: Custom License
-* 🏷️ Latest Release: v1.4.0 (2026-02-08)
+* 🏷️ Latest Release: v1.4.1 (2026-02-16)
=> showcase/gogios/image-1.png gogios screenshot
@@ -264,7 +290,7 @@ The architecture is straightforward: JSON configuration defines checks (plugin p
---
-### 9. yoga
+### 10. yoga
* 💻 Languages: Go (66.1%), HTML (33.9%)
* 📚 Documentation: Markdown (100.0%)
@@ -272,7 +298,7 @@ The architecture is straightforward: JSON configuration defines checks (plugin p
* 📈 Lines of Code: 5921
* 📄 Lines of Documentation: 83
* 📅 Development Period: 2025-10-01 to 2026-01-28
-* 🏆 Score: 31.0 (combines code size and activity)
+* 🏆 Score: 30.7 (combines code size and activity)
* ⚖️ License: No license found
* 🏷️ Latest Release: v0.4.0 (2026-01-28)
@@ -288,7 +314,7 @@ The implementation follows clean Go architecture with domain logic organized und
---
-### 10. perc
+### 11. perc
* 💻 Languages: Go (100.0%)
* 📚 Documentation: Markdown (100.0%)
@@ -296,7 +322,7 @@ The implementation follows clean Go architecture with domain logic organized und
* 📈 Lines of Code: 452
* 📄 Lines of Documentation: 80
* 📅 Development Period: 2025-11-25 to 2025-11-25
-* 🏆 Score: 30.0 (combines code size and activity)
+* 🏆 Score: 29.6 (combines code size and activity)
* ⚖️ License: No license found
* 🏷️ Latest Release: v0.1.0 (2025-11-25)
@@ -310,7 +336,7 @@ The tool is built as a simple Go CLI application with a standard project layout
---
-### 11. totalrecall
+### 12. totalrecall
* 💻 Languages: Go (99.0%), Shell (0.5%), YAML (0.4%)
* 📚 Documentation: Markdown (99.5%), Text (0.5%)
@@ -318,7 +344,7 @@ The tool is built as a simple Go CLI application with a standard project layout
* 📈 Lines of Code: 13129
* 📄 Lines of Documentation: 377
* 📅 Development Period: 2025-07-14 to 2026-01-21
-* 🏆 Score: 26.1 (combines code size and activity)
+* 🏆 Score: 25.9 (combines code size and activity)
* ⚖️ License: MIT
* 🏷️ Latest Release: v0.8.0 (2026-01-21)
@@ -336,43 +362,17 @@ The project offers both a keyboard-driven GUI for interactive use and a CLI for
---
-### 12. ior
-
-* 💻 Languages: Go (63.2%), C (36.0%), C/C++ (0.8%)
-* 📚 Documentation: Markdown (79.3%), Text (20.7%)
-* 📊 Commits: 344
-* 📈 Lines of Code: 15784
-* 📄 Lines of Documentation: 2313
-* 📅 Development Period: 2024-01-18 to 2026-02-21
-* 🏆 Score: 20.9 (combines code size and activity)
-* ⚖️ License: No license found
-* 🧪 Status: Experimental (no releases yet)
-
-
-=> showcase/ior/image-1.png ior screenshot
-
-I/O Riot NG is a Linux-only performance analysis tool that uses BPF (Berkeley Packet Filter) to trace synchronous I/O syscalls and measure their execution time. It captures stack traces during I/O operations and generates compressed output in a format compatible with Inferno FlameGraphs, allowing developers to visually identify performance bottlenecks caused by blocking I/O calls. This makes it particularly useful for diagnosing latency issues in applications where I/O operations are suspected of causing performance degradation.
-
-=> showcase/ior/image-2.svg ior screenshot
-
-The tool is implemented in Go and C, leveraging libbpfgo for BPF interaction. It automatically generates BPF tracepoint handlers and Go type definitions from Linux kernel tracepoint data, attaches to syscall entry/exit points, and collects timing data with minimal overhead. The project is a modern successor to the original I/O Riot (which used SystemTap), offering better performance and easier deployment through BPF's built-in kernel support.
-
-=> https://codeberg.org/snonux/ior View on Codeberg
-=> https://github.com/snonux/ior View on GitHub
-
----
-
### 13. gitsyncer
-* 💻 Languages: Go (92.5%), Shell (7.1%), JSON (0.4%)
+* 💻 Languages: Go (92.6%), Shell (7.0%), JSON (0.4%)
* 📚 Documentation: Markdown (100.0%)
-* 📊 Commits: 117
-* 📈 Lines of Code: 10446
+* 📊 Commits: 120
+* 📈 Lines of Code: 10568
* 📄 Lines of Documentation: 2445
-* 📅 Development Period: 2025-06-23 to 2026-02-07
-* 🏆 Score: 20.7 (combines code size and activity)
+* 📅 Development Period: 2025-06-23 to 2026-02-22
+* 🏆 Score: 22.5 (combines code size and activity)
* ⚖️ License: BSD-2-Clause
-* 🏷️ Latest Release: v0.12.0 (2026-02-07)
+* 🏷️ Latest Release: v0.12.1 (2026-02-22)
GitSyncer is a Go-based CLI tool that automatically synchronizes git repositories across multiple hosting platforms (GitHub, Codeberg, SSH servers). It maintains all branches in sync bidirectionally, never deleting branches but automatically creating and updating them as needed. The tool excels at providing repository redundancy and backup, with special support for one-way SSH backups to private servers (like home NAS devices) that may be offline intermittently. It includes AI-powered features for generating release notes and project showcase documentation, plus automated weekly batch synchronization for hands-off maintenance.
@@ -392,7 +392,7 @@ The implementation uses a git remotes approach: it clones from one organization,
* 📈 Lines of Code: 6544
* 📄 Lines of Documentation: 254
* 📅 Development Period: 2025-06-19 to 2026-02-04
-* 🏆 Score: 17.9 (combines code size and activity)
+* 🏆 Score: 17.8 (combines code size and activity)
* ⚖️ License: BSD-2-Clause
* 🏷️ Latest Release: v0.11.0 (2026-02-04)
@@ -418,7 +418,7 @@ Under the hood, Task Samurai acts as a front-end wrapper that invokes the native
* 📈 Lines of Code: 1902
* 📄 Lines of Documentation: 423
* 📅 Development Period: 2023-01-02 to 2025-11-01
-* 🏆 Score: 17.8 (combines code size and activity)
+* 🏆 Score: 17.7 (combines code size and activity)
* ⚖️ License: Custom License
* 🏷️ Latest Release: v0.2.0 (2025-10-21)
@@ -440,7 +440,7 @@ The implementation uses a modular Perl architecture with specialized components:
* 📈 Lines of Code: 1538
* 📄 Lines of Documentation: 99
* 📅 Development Period: 2025-06-25 to 2026-01-02
-* 🏆 Score: 16.1 (combines code size and activity)
+* 🏆 Score: 16.0 (combines code size and activity)
* ⚖️ License: MIT
* 🏷️ Latest Release: v0.3.0 (2026-01-02)
@@ -454,7 +454,33 @@ The architecture is straightforward: it's a Go-based CLI application that persis
---
-### 17. dtail
+### 17. gos
+
+* 💻 Languages: Go (99.5%), JSON (0.2%), Shell (0.2%)
+* 📚 Documentation: Markdown (100.0%)
+* 📊 Commits: 400
+* 📈 Lines of Code: 4143
+* 📄 Lines of Documentation: 477
+* 📅 Development Period: 2024-05-04 to 2026-02-17
+* 🏆 Score: 15.6 (combines code size and activity)
+* ⚖️ License: Custom License
+* 🏷️ Latest Release: v1.2.4 (2026-02-17)
+
+
+=> showcase/gos/image-1.png gos screenshot
+
+Gos is a command-line social media scheduling tool written in Go that serves as a self-hosted replacement for Buffer.com. It enables users to schedule and post messages to Mastodon and LinkedIn (plus a "Noop" pseudo-platform for tracking) through a simple file-based queueing system. Messages are created as text files in a designated directory (`~/.gosdir`), with optional tags embedded in filenames or content to control platform targeting, priority, and scheduling behavior. The tool addresses limitations of commercial services by offering unlimited posts, a scriptable CLI interface, and full user control without unwanted features like AI assistants.
+
+=> showcase/gos/image-2.png gos screenshot
+
+The implementation uses OAuth2 for LinkedIn authentication, stores configuration as JSON, and manages posts through a platform-specific database structure. Gos employs intelligent scheduling based on configurable weekly targets, lookback windows, pause periods between posts, and run intervals to prevent over-posting. It supports priority queuing, platform exclusion rules, dry-run testing, and can generate Gemini gemtext summaries of posted content. Built with Mage for automation, the tool integrates seamlessly into shell workflows and can be triggered on intervals to maintain a consistent posting cadence across platforms.
+
+=> https://codeberg.org/snonux/gos View on Codeberg
+=> https://github.com/snonux/gos View on GitHub
+
+---
+
+### 18. dtail
* 💻 Languages: Go (93.9%), JSON (2.8%), C (2.0%), Make (0.5%), C/C++ (0.3%), Config (0.2%), Shell (0.2%), Docker (0.1%)
* 📚 Documentation: Text (79.4%), Markdown (20.6%)
@@ -462,7 +488,7 @@ The architecture is straightforward: it's a Go-based CLI application that persis
* 📈 Lines of Code: 20091
* 📄 Lines of Documentation: 5674
* 📅 Development Period: 2020-01-09 to 2025-06-20
-* 🏆 Score: 15.4 (combines code size and activity)
+* 🏆 Score: 15.3 (combines code size and activity)
* ⚖️ License: Apache-2.0
* 🏷️ Latest Release: v4.3.3 (2024-08-23)
@@ -480,32 +506,6 @@ The architecture follows a client-server model where DTail servers run on target
---
-### 18. gos
-
-* 💻 Languages: Go (99.8%), JSON (0.2%)
-* 📚 Documentation: Markdown (100.0%)
-* 📊 Commits: 399
-* 📈 Lines of Code: 4102
-* 📄 Lines of Documentation: 357
-* 📅 Development Period: 2024-05-04 to 2025-12-27
-* 🏆 Score: 14.6 (combines code size and activity)
-* ⚖️ License: Custom License
-* 🏷️ Latest Release: v1.2.3 (2026-01-31)
-
-
-=> showcase/gos/image-1.png gos screenshot
-
-Gos is a command-line social media scheduling tool written in Go that serves as a self-hosted replacement for Buffer.com. It enables users to schedule and post messages to Mastodon and LinkedIn (plus a "Noop" pseudo-platform for tracking) through a simple file-based queueing system. Messages are created as text files in a designated directory (`~/.gosdir`), with optional tags embedded in filenames or content to control platform targeting, priority, and scheduling behavior. The tool addresses limitations of commercial services by offering unlimited posts, a scriptable CLI interface, and full user control without unwanted features like AI assistants.
-
-=> showcase/gos/image-2.png gos screenshot
-
-The implementation uses OAuth2 for LinkedIn authentication, stores configuration as JSON, and manages posts through a platform-specific database structure. Gos employs intelligent scheduling based on configurable weekly targets, lookback windows, pause periods between posts, and run intervals to prevent over-posting. It supports priority queuing, platform exclusion rules, dry-run testing, and can generate Gemini gemtext summaries of posted content. Built with Mage for automation, the tool integrates seamlessly into shell workflows and can be triggered on intervals to maintain a consistent posting cadence across platforms.
-
-=> https://codeberg.org/snonux/gos View on Codeberg
-=> https://github.com/snonux/gos View on GitHub
-
----
-
### 19. ds-sim
* 💻 Languages: Java (98.9%), Shell (0.6%), CSS (0.5%)
@@ -514,7 +514,7 @@ The implementation uses OAuth2 for LinkedIn authentication, stores configuration
* 📈 Lines of Code: 25762
* 📄 Lines of Documentation: 3101
* 📅 Development Period: 2008-05-15 to 2025-06-27
-* 🏆 Score: 14.1 (combines code size and activity)
+* 🏆 Score: 14.0 (combines code size and activity)
* ⚖️ License: Custom License
* 🧪 Status: Experimental (no releases yet)
@@ -652,7 +652,7 @@ The implementation leverages Go's cross-compilation capabilities and Fyne's UI a
* 📈 Lines of Code: 33
* 📄 Lines of Documentation: 3
* 📅 Development Period: 2025-04-03 to 2025-04-03
-* 🏆 Score: 4.7 (combines code size and activity)
+* 🏆 Score: 4.6 (combines code size and activity)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -875,11 +875,11 @@ The key advantage over traditional benchmarking tools is that it reproduces actu
* 💻 Languages: Perl (65.8%), Docker (34.2%)
* 📚 Documentation: Markdown (100.0%)
-* 📊 Commits: 19
+* 📊 Commits: 22
* 📈 Lines of Code: 149
-* 📄 Lines of Documentation: 15
-* 📅 Development Period: 2011-07-09 to 2026-02-03
-* 🏆 Score: 1.3 (combines code size and activity)
+* 📄 Lines of Documentation: 21
+* 📅 Development Period: 2011-07-09 to 2026-02-17
+* 🏆 Score: 1.5 (combines code size and activity)
* ⚖️ License: Custom License
* 🧪 Status: Experimental (no releases yet)
@@ -984,29 +984,7 @@ The tool is implemented around a hierarchical configuration system (`/etc/pingdo
---
-### 40. fype
-
-* 💻 Languages: C (71.8%), C/C++ (20.0%), HTML (6.3%), Make (1.8%)
-* 📚 Documentation: Text (65.1%), LaTeX (21.0%), Markdown (14.0%)
-* 📊 Commits: 107
-* 📈 Lines of Code: 9363
-* 📄 Lines of Documentation: 2713
-* 📅 Development Period: 2008-05-15 to 2026-02-20
-* 🏆 Score: 0.9 (combines code size and activity)
-* ⚖️ License: Custom License
-* 🧪 Status: Experimental (no releases yet)
-
-
-Fype is a 32-bit scripting language designed as a fun, AWK-inspired alternative with a simpler syntax. It supports variables with automatic type conversion, functions, loops, control structures, and built-in operations for math, I/O, and system calls. A notable feature is its support for "synonyms" (references/aliases to variables and functions), along with both procedures (using the caller's namespace) and functions (with lexical scoping). The language uses a straightforward syntax with single-character comments (#) and statement-based execution terminated by semicolons.
-
-The implementation uses a simple top-down parser with maximum lookahead of 1, interpreting code simultaneously as it parses, which means syntax errors are only caught at runtime. Written in C and compiled with GCC, it's designed for BSD systems (tested on FreeBSD 7.0) and uses NetBSD Make for building. The project is still unreleased and incomplete, but aims to eventually match AWK's capabilities while potentially adding modern features like function pointers and closures, though explicitly avoiding complexity like OOP, Unicode, or threading.
-
-=> https://codeberg.org/snonux/fype View on Codeberg
-=> https://github.com/snonux/fype View on GitHub
-
----
-
-### 41. xerl
+### 40. xerl
* 💻 Languages: Perl (98.3%), Config (1.2%), Make (0.5%)
* 📊 Commits: 670
@@ -1027,7 +1005,7 @@ The implementation follows strict OO Perl conventions with explicit typing and p
---
-### 42. ychat
+### 41. ychat
* 💻 Languages: C++ (49.9%), C/C++ (22.2%), Shell (20.6%), Perl (2.5%), HTML (1.9%), Config (1.8%), Make (0.9%), CSS (0.2%)
* 📚 Documentation: Text (100.0%)
@@ -1050,7 +1028,7 @@ The architecture emphasizes speed and scalability through several key design cho
---
-### 43. fapi
+### 42. fapi
* 💻 Languages: Python (96.6%), Make (3.1%), Config (0.3%)
* 📚 Documentation: Text (98.3%), Markdown (1.7%)
@@ -1072,7 +1050,7 @@ The tool is implemented in Python and depends on the bigsuds library (F5's iCont
---
-### 44. perl-c-fibonacci
+### 43. perl-c-fibonacci
* 💻 Languages: C (80.4%), Make (19.6%)
* 📚 Documentation: Text (100.0%)
@@ -1093,7 +1071,7 @@ perl-c-fibonacci: source code repository.
---
-### 45. netcalendar
+### 44. netcalendar
* 💻 Languages: Java (83.0%), HTML (12.9%), XML (3.0%), CSS (0.8%), Make (0.2%)
* 📚 Documentation: Text (89.7%), Markdown (10.3%)
@@ -1120,17 +1098,17 @@ The key feature is its intelligent color-coded event visualization system that h
---
-### 46. loadbars
+### 45. loadbars
* 💻 Languages: Perl (97.4%), Make (2.6%)
* 📚 Documentation: Text (100.0%)
-* 📊 Commits: 557
+* 📊 Commits: 575
* 📈 Lines of Code: 1828
* 📄 Lines of Documentation: 100
* 📅 Development Period: 2010-11-05 to 2015-05-23
* 🏆 Score: 0.7 (combines code size and activity)
* ⚖️ License: No license found
-* 🏷️ Latest Release: v0.9.0 (2026-02-14)
+* 🏷️ Latest Release: v0.11.1 (2026-02-17)
⚠️ **Notice**: This project appears to be finished, obsolete, or no longer maintained. Last meaningful activity was over 2 years ago. Use at your own risk.
@@ -1141,7 +1119,7 @@ loadbars: source code repository.
---
-### 47. gotop
+### 46. gotop
* 💻 Languages: Go (98.0%), Make (2.0%)
* 📚 Documentation: Markdown (50.0%), Text (50.0%)
@@ -1164,7 +1142,7 @@ The implementation uses a concurrent architecture with goroutines for data colle
---
-### 48. rubyfy
+### 47. rubyfy
* 💻 Languages: Ruby (98.5%), JSON (1.5%)
* 📚 Documentation: Markdown (100.0%)
@@ -1187,6 +1165,29 @@ The tool is implemented as a lightweight Ruby script that prioritizes simplicity
---
+### 48. fype
+
+* 💻 Languages: C (71.2%), C/C++ (20.7%), HTML (6.6%), Make (1.5%)
+* 📚 Documentation: Text (60.3%), LaTeX (39.7%)
+* 📊 Commits: 107
+* 📈 Lines of Code: 8954
+* 📄 Lines of Documentation: 1432
+* 📅 Development Period: 2008-05-15 to 2014-06-30
+* 🏆 Score: 0.7 (combines code size and activity)
+* ⚖️ License: Custom License
+* 🧪 Status: Experimental (no releases yet)
+
+⚠️ **Notice**: This project appears to be finished, obsolete, or no longer maintained. Last meaningful activity was over 2 years ago. Use at your own risk.
+
+Fype is a 32-bit scripting language designed as a fun, AWK-inspired alternative with a simpler syntax. It supports variables with automatic type conversion, functions, loops, control structures, and built-in operations for math, I/O, and system calls. A notable feature is its support for "synonyms" (references/aliases to variables and functions), along with both procedures (using the caller's namespace) and functions (with lexical scoping). The language uses a straightforward syntax with single-character comments (#) and statement-based execution terminated by semicolons.
+
+The implementation uses a simple top-down parser with maximum lookahead of 1, interpreting code simultaneously as it parses, which means syntax errors are only caught at runtime. Written in C and compiled with GCC, it's designed for BSD systems (tested on FreeBSD 7.0) and uses NetBSD Make for building. The project is still unreleased and incomplete, but aims to eventually match AWK's capabilities while potentially adding modern features like function pointers and closures, though explicitly avoiding complexity like OOP, Unicode, or threading.
+
+=> https://codeberg.org/snonux/fype View on Codeberg
+=> https://github.com/snonux/fype View on GitHub
+
+---
+
### 49. pwgrep
* 💻 Languages: Shell (85.0%), Make (15.0%)
diff --git a/about/showcase.gmi.tpl b/about/showcase.gmi.tpl
index 6ddec4cb..f0d73e26 100644
--- a/about/showcase.gmi.tpl
+++ b/about/showcase.gmi.tpl
@@ -1,6 +1,6 @@
# Project Showcase
-Generated on: 2026-02-21
+Generated on: 2026-02-22
This page showcases my side projects, providing an overview of what each project does, its technical implementation, and key metrics. Each project summary includes information about the programming languages used, development activity, and licensing. The projects are ranked by score, which combines project size and recent activity.
@@ -9,36 +9,38 @@ This page showcases my side projects, providing an overview of what each project
## Overall Statistics
* 📦 Total Projects: 62
-* 📊 Total Commits: 12,551
-* 📈 Total Lines of Code: 311,290
-* 📄 Total Lines of Documentation: 41,076
-* 💻 Languages: Go (36.4%), Java (13.2%), C++ (8.1%), C (6.3%), XML (6.2%), Shell (5.9%), Perl (5.6%), C/C++ (5.2%), YAML (5.1%), HTML (1.9%), Config (1.2%), Ruby (1.0%), HCL (0.9%), Python (0.7%), CSS (0.6%), Make (0.5%), JSON (0.4%), Haskell (0.2%), JavaScript (0.2%), Raku (0.1%), TOML (0.1%)
-* 📚 Documentation: Markdown (69.8%), Text (28.9%), LaTeX (1.4%)
+* 📊 Total Commits: 12,631
+* 📈 Total Lines of Code: 317,099
+* 📄 Total Lines of Documentation: 40,325
+* 💻 Languages: Go (37.7%), Java (12.9%), C++ (8.0%), C (6.1%), XML (6.1%), Shell (5.8%), Perl (5.5%), C/C++ (5.1%), YAML (5.0%), HTML (1.9%), Config (1.2%), Ruby (0.9%), HCL (0.9%), Python (0.6%), CSS (0.6%), Make (0.5%), JSON (0.4%), Haskell (0.2%), JavaScript (0.2%), Raku (0.1%), TOML (0.1%)
+* 📚 Documentation: Markdown (71.4%), Text (27.2%), LaTeX (1.4%)
* 🚀 Release Status: 39 released, 23 experimental (62.9% with releases, 37.1% experimental)
## Projects
-### 1. hexai
+### 1. ior
-* 💻 Languages: Go (100.0%)
-* 📚 Documentation: Markdown (100.0%)
-* 📊 Commits: 342
-* 📈 Lines of Code: 29895
-* 📄 Lines of Documentation: 5502
-* 📅 Development Period: 2025-08-01 to 2026-02-13
-* 🏆 Score: 365.1 (combines code size and activity)
+* 💻 Languages: Go (73.1%), C (26.3%), C/C++ (0.6%)
+* 📚 Documentation: Markdown (80.3%), Text (19.7%)
+* 📊 Commits: 384
+* 📈 Lines of Code: 21785
+* 📄 Lines of Documentation: 2428
+* 📅 Development Period: 2024-01-18 to 2026-02-21
+* 🏆 Score: 2219.6 (combines code size and activity)
* ⚖️ License: No license found
-* 🏷️ Latest Release: v0.21.0 (2026-02-12)
+* 🧪 Status: Experimental (no releases yet)
-=> showcase/hexai/image-1.png hexai screenshot
+=> showcase/ior/image-1.png ior screenshot
-Hexai is a Go-based AI integration tool designed primarily for the Helix editor that provides LSP (Language Server Protocol) powered AI features. It offers code auto-completion, AI-driven code actions, in-editor chat with LLMs, and a standalone CLI tool for direct LLM interaction. A standout feature is its ability to query multiple AI providers (OpenAI, OpenRouter, GitHub Copilot, Ollama) in parallel, allowing developers to compare responses side-by-side. It has enhanced capabilities for Go code understanding, such as generating unit tests from functions, while supporting other programming languages as well.
+I/O Riot NG is a Linux-only performance analysis tool that uses BPF (Berkeley Packet Filter) to trace synchronous I/O syscalls and measure their execution time. It captures stack traces during I/O operations and generates compressed output in a format compatible with Inferno FlameGraphs, allowing developers to visually identify performance bottlenecks caused by blocking I/O calls. This makes it particularly useful for diagnosing latency issues in applications where I/O operations are suspected of causing performance degradation.
-The project is implemented as an LSP server written in Go, with a TUI component built using Bubble Tea for the tmux-based code action runner (`hexai-tmux-action`). This architecture allows it to integrate seamlessly into LSP-compatible editors, with special focus on Helix + tmux workflows. The custom prompt feature lets developers use their preferred editor to craft prompts, making it flexible for various development workflows.
+=> showcase/ior/image-2.svg ior screenshot
-=> https://codeberg.org/snonux/hexai View on Codeberg
-=> https://github.com/snonux/hexai View on GitHub
+The tool is implemented in Go and C, leveraging libbpfgo for BPF interaction. It automatically generates BPF tracepoint handlers and Go type definitions from Linux kernel tracepoint data, attaches to syscall entry/exit points, and collects timing data with minimal overhead. The project is a modern successor to the original I/O Riot (which used SystemTap), offering better performance and easier deployment through BPF's built-in kernel support.
+
+=> https://codeberg.org/snonux/ior View on Codeberg
+=> https://github.com/snonux/ior View on GitHub
---
@@ -46,11 +48,11 @@ The project is implemented as an LSP server written in Go, with a TUI component
* 💻 Languages: Shell (58.9%), CSS (11.0%), Config (10.2%), TOML (10.1%), Ruby (8.4%), JSON (1.1%), INI (0.2%)
* 📚 Documentation: Markdown (100.0%)
-* 📊 Commits: 776
-* 📈 Lines of Code: 2960
-* 📄 Lines of Documentation: 653
-* 📅 Development Period: 2023-07-30 to 2026-02-21
-* 🏆 Score: 364.5 (combines code size and activity)
+* 📊 Commits: 783
+* 📈 Lines of Code: 2961
+* 📄 Lines of Documentation: 949
+* 📅 Development Period: 2023-07-30 to 2026-02-22
+* 🏆 Score: 427.7 (combines code size and activity)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -64,7 +66,31 @@ The architecture is straightforward: config files live in subdirectories mirrori
---
-### 3. epimetheus
+### 3. hexai
+
+* 💻 Languages: Go (100.0%)
+* 📚 Documentation: Markdown (100.0%)
+* 📊 Commits: 343
+* 📈 Lines of Code: 29895
+* 📄 Lines of Documentation: 5508
+* 📅 Development Period: 2025-08-01 to 2026-02-22
+* 🏆 Score: 341.8 (combines code size and activity)
+* ⚖️ License: No license found
+* 🏷️ Latest Release: v0.21.0 (2026-02-12)
+
+
+=> showcase/hexai/image-1.png hexai screenshot
+
+Hexai is a Go-based AI integration tool designed primarily for the Helix editor that provides LSP (Language Server Protocol) powered AI features. It offers code auto-completion, AI-driven code actions, in-editor chat with LLMs, and a standalone CLI tool for direct LLM interaction. A standout feature is its ability to query multiple AI providers (OpenAI, OpenRouter, GitHub Copilot, Ollama) in parallel, allowing developers to compare responses side-by-side. It has enhanced capabilities for Go code understanding, such as generating unit tests from functions, while supporting other programming languages as well.
+
+The project is implemented as an LSP server written in Go, with a TUI component built using Bubble Tea for the tmux-based code action runner (`hexai-tmux-action`). This architecture allows it to integrate seamlessly into LSP-compatible editors, with special focus on Helix + tmux workflows. The custom prompt feature lets developers use their preferred editor to craft prompts, making it flexible for various development workflows.
+
+=> https://codeberg.org/snonux/hexai View on Codeberg
+=> https://github.com/snonux/hexai View on GitHub
+
+---
+
+### 4. epimetheus
* 💻 Languages: Go (85.2%), Shell (14.8%)
* 📚 Documentation: Markdown (100.0%)
@@ -72,7 +98,7 @@ The architecture is straightforward: config files live in subdirectories mirrori
* 📈 Lines of Code: 5199
* 📄 Lines of Documentation: 1734
* 📅 Development Period: 2026-02-07 to 2026-02-14
-* 🏆 Score: 314.0 (combines code size and activity)
+* 🏆 Score: 284.8 (combines code size and activity)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -88,15 +114,15 @@ The architecture routes current data (<5 min old) through Pushgateway where Prom
---
-### 4. conf
+### 5. conf
-* 💻 Languages: YAML (80.7%), Perl (10.0%), Shell (6.1%), Python (2.3%), Docker (0.7%), Config (0.2%), HTML (0.1%)
+* 💻 Languages: YAML (80.7%), Perl (9.9%), Shell (6.0%), Python (2.3%), Docker (0.7%), Config (0.2%), HTML (0.1%)
* 📚 Documentation: Markdown (97.1%), Text (2.9%)
-* 📊 Commits: 785
-* 📈 Lines of Code: 19079
-* 📄 Lines of Documentation: 6585
-* 📅 Development Period: 2021-12-28 to 2026-02-08
-* 🏆 Score: 250.8 (combines code size and activity)
+* 📊 Commits: 791
+* 📈 Lines of Code: 19132
+* 📄 Lines of Documentation: 6572
+* 📅 Development Period: 2021-12-28 to 2026-02-15
+* 🏆 Score: 261.6 (combines code size and activity)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -110,7 +136,7 @@ The project is organized into distinct subdirectories: `dotfiles/` contains shel
---
-### 5. foo.zone
+### 6. foo.zone
* 💻 Languages: XML (98.7%), Shell (1.0%), Go (0.3%)
* 📚 Documentation: Text (86.2%), Markdown (13.8%)
@@ -118,7 +144,7 @@ The project is organized into distinct subdirectories: `dotfiles/` contains shel
* 📈 Lines of Code: 18702
* 📄 Lines of Documentation: 174
* 📅 Development Period: 2021-04-29 to 2026-02-07
-* 🏆 Score: 215.8 (combines code size and activity)
+* 🏆 Score: 203.4 (combines code size and activity)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -130,7 +156,7 @@ foo.zone: source code repository.
---
-### 6. scifi
+### 7. scifi
* 💻 Languages: JSON (35.9%), CSS (30.6%), JavaScript (29.6%), HTML (3.8%)
* 📚 Documentation: Markdown (100.0%)
@@ -138,7 +164,7 @@ foo.zone: source code repository.
* 📈 Lines of Code: 1664
* 📄 Lines of Documentation: 853
* 📅 Development Period: 2026-01-25 to 2026-01-27
-* 🏆 Score: 117.3 (combines code size and activity)
+* 🏆 Score: 112.3 (combines code size and activity)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -152,7 +178,7 @@ The architecture keeps content separate from presentation: book metadata lives i
---
-### 7. log4jbench
+### 8. log4jbench
* 💻 Languages: Java (78.9%), XML (21.1%)
* 📚 Documentation: Markdown (100.0%)
@@ -160,7 +186,7 @@ The architecture keeps content separate from presentation: book metadata lives i
* 📈 Lines of Code: 774
* 📄 Lines of Documentation: 119
* 📅 Development Period: 2026-01-09 to 2026-01-09
-* 🏆 Score: 66.4 (combines code size and activity)
+* 🏆 Score: 64.6 (combines code size and activity)
* ⚖️ License: MIT
* 🧪 Status: Experimental (no releases yet)
@@ -174,17 +200,17 @@ The implementation uses a fat JAR built with Maven, requiring Java 17+. It's des
---
-### 8. gogios
+### 9. gogios
* 💻 Languages: Go (98.9%), JSON (0.6%), YAML (0.5%)
* 📚 Documentation: Markdown (94.9%), Text (5.1%)
-* 📊 Commits: 108
+* 📊 Commits: 109
* 📈 Lines of Code: 3875
* 📄 Lines of Documentation: 394
-* 📅 Development Period: 2023-04-17 to 2026-02-08
-* 🏆 Score: 33.3 (combines code size and activity)
+* 📅 Development Period: 2023-04-17 to 2026-02-16
+* 🏆 Score: 35.0 (combines code size and activity)
* ⚖️ License: Custom License
-* 🏷️ Latest Release: v1.4.0 (2026-02-08)
+* 🏷️ Latest Release: v1.4.1 (2026-02-16)
=> showcase/gogios/image-1.png gogios screenshot
@@ -198,7 +224,7 @@ The architecture is straightforward: JSON configuration defines checks (plugin p
---
-### 9. yoga
+### 10. yoga
* 💻 Languages: Go (66.1%), HTML (33.9%)
* 📚 Documentation: Markdown (100.0%)
@@ -206,7 +232,7 @@ The architecture is straightforward: JSON configuration defines checks (plugin p
* 📈 Lines of Code: 5921
* 📄 Lines of Documentation: 83
* 📅 Development Period: 2025-10-01 to 2026-01-28
-* 🏆 Score: 31.0 (combines code size and activity)
+* 🏆 Score: 30.7 (combines code size and activity)
* ⚖️ License: No license found
* 🏷️ Latest Release: v0.4.0 (2026-01-28)
@@ -222,7 +248,7 @@ The implementation follows clean Go architecture with domain logic organized und
---
-### 10. perc
+### 11. perc
* 💻 Languages: Go (100.0%)
* 📚 Documentation: Markdown (100.0%)
@@ -230,7 +256,7 @@ The implementation follows clean Go architecture with domain logic organized und
* 📈 Lines of Code: 452
* 📄 Lines of Documentation: 80
* 📅 Development Period: 2025-11-25 to 2025-11-25
-* 🏆 Score: 30.0 (combines code size and activity)
+* 🏆 Score: 29.6 (combines code size and activity)
* ⚖️ License: No license found
* 🏷️ Latest Release: v0.1.0 (2025-11-25)
@@ -244,7 +270,7 @@ The tool is built as a simple Go CLI application with a standard project layout
---
-### 11. totalrecall
+### 12. totalrecall
* 💻 Languages: Go (99.0%), Shell (0.5%), YAML (0.4%)
* 📚 Documentation: Markdown (99.5%), Text (0.5%)
@@ -252,7 +278,7 @@ The tool is built as a simple Go CLI application with a standard project layout
* 📈 Lines of Code: 13129
* 📄 Lines of Documentation: 377
* 📅 Development Period: 2025-07-14 to 2026-01-21
-* 🏆 Score: 26.1 (combines code size and activity)
+* 🏆 Score: 25.9 (combines code size and activity)
* ⚖️ License: MIT
* 🏷️ Latest Release: v0.8.0 (2026-01-21)
@@ -270,43 +296,17 @@ The project offers both a keyboard-driven GUI for interactive use and a CLI for
---
-### 12. ior
-
-* 💻 Languages: Go (63.2%), C (36.0%), C/C++ (0.8%)
-* 📚 Documentation: Markdown (79.3%), Text (20.7%)
-* 📊 Commits: 344
-* 📈 Lines of Code: 15784
-* 📄 Lines of Documentation: 2313
-* 📅 Development Period: 2024-01-18 to 2026-02-21
-* 🏆 Score: 20.9 (combines code size and activity)
-* ⚖️ License: No license found
-* 🧪 Status: Experimental (no releases yet)
-
-
-=> showcase/ior/image-1.png ior screenshot
-
-I/O Riot NG is a Linux-only performance analysis tool that uses BPF (Berkeley Packet Filter) to trace synchronous I/O syscalls and measure their execution time. It captures stack traces during I/O operations and generates compressed output in a format compatible with Inferno FlameGraphs, allowing developers to visually identify performance bottlenecks caused by blocking I/O calls. This makes it particularly useful for diagnosing latency issues in applications where I/O operations are suspected of causing performance degradation.
-
-=> showcase/ior/image-2.svg ior screenshot
-
-The tool is implemented in Go and C, leveraging libbpfgo for BPF interaction. It automatically generates BPF tracepoint handlers and Go type definitions from Linux kernel tracepoint data, attaches to syscall entry/exit points, and collects timing data with minimal overhead. The project is a modern successor to the original I/O Riot (which used SystemTap), offering better performance and easier deployment through BPF's built-in kernel support.
-
-=> https://codeberg.org/snonux/ior View on Codeberg
-=> https://github.com/snonux/ior View on GitHub
-
----
-
### 13. gitsyncer
-* 💻 Languages: Go (92.5%), Shell (7.1%), JSON (0.4%)
+* 💻 Languages: Go (92.6%), Shell (7.0%), JSON (0.4%)
* 📚 Documentation: Markdown (100.0%)
-* 📊 Commits: 117
-* 📈 Lines of Code: 10446
+* 📊 Commits: 120
+* 📈 Lines of Code: 10568
* 📄 Lines of Documentation: 2445
-* 📅 Development Period: 2025-06-23 to 2026-02-07
-* 🏆 Score: 20.7 (combines code size and activity)
+* 📅 Development Period: 2025-06-23 to 2026-02-22
+* 🏆 Score: 22.5 (combines code size and activity)
* ⚖️ License: BSD-2-Clause
-* 🏷️ Latest Release: v0.12.0 (2026-02-07)
+* 🏷️ Latest Release: v0.12.1 (2026-02-22)
GitSyncer is a Go-based CLI tool that automatically synchronizes git repositories across multiple hosting platforms (GitHub, Codeberg, SSH servers). It maintains all branches in sync bidirectionally, never deleting branches but automatically creating and updating them as needed. The tool excels at providing repository redundancy and backup, with special support for one-way SSH backups to private servers (like home NAS devices) that may be offline intermittently. It includes AI-powered features for generating release notes and project showcase documentation, plus automated weekly batch synchronization for hands-off maintenance.
@@ -326,7 +326,7 @@ The implementation uses a git remotes approach: it clones from one organization,
* 📈 Lines of Code: 6544
* 📄 Lines of Documentation: 254
* 📅 Development Period: 2025-06-19 to 2026-02-04
-* 🏆 Score: 17.9 (combines code size and activity)
+* 🏆 Score: 17.8 (combines code size and activity)
* ⚖️ License: BSD-2-Clause
* 🏷️ Latest Release: v0.11.0 (2026-02-04)
@@ -352,7 +352,7 @@ Under the hood, Task Samurai acts as a front-end wrapper that invokes the native
* 📈 Lines of Code: 1902
* 📄 Lines of Documentation: 423
* 📅 Development Period: 2023-01-02 to 2025-11-01
-* 🏆 Score: 17.8 (combines code size and activity)
+* 🏆 Score: 17.7 (combines code size and activity)
* ⚖️ License: Custom License
* 🏷️ Latest Release: v0.2.0 (2025-10-21)
@@ -374,7 +374,7 @@ The implementation uses a modular Perl architecture with specialized components:
* 📈 Lines of Code: 1538
* 📄 Lines of Documentation: 99
* 📅 Development Period: 2025-06-25 to 2026-01-02
-* 🏆 Score: 16.1 (combines code size and activity)
+* 🏆 Score: 16.0 (combines code size and activity)
* ⚖️ License: MIT
* 🏷️ Latest Release: v0.3.0 (2026-01-02)
@@ -388,7 +388,33 @@ The architecture is straightforward: it's a Go-based CLI application that persis
---
-### 17. dtail
+### 17. gos
+
+* 💻 Languages: Go (99.5%), JSON (0.2%), Shell (0.2%)
+* 📚 Documentation: Markdown (100.0%)
+* 📊 Commits: 400
+* 📈 Lines of Code: 4143
+* 📄 Lines of Documentation: 477
+* 📅 Development Period: 2024-05-04 to 2026-02-17
+* 🏆 Score: 15.6 (combines code size and activity)
+* ⚖️ License: Custom License
+* 🏷️ Latest Release: v1.2.4 (2026-02-17)
+
+
+=> showcase/gos/image-1.png gos screenshot
+
+Gos is a command-line social media scheduling tool written in Go that serves as a self-hosted replacement for Buffer.com. It enables users to schedule and post messages to Mastodon and LinkedIn (plus a "Noop" pseudo-platform for tracking) through a simple file-based queueing system. Messages are created as text files in a designated directory (`~/.gosdir`), with optional tags embedded in filenames or content to control platform targeting, priority, and scheduling behavior. The tool addresses limitations of commercial services by offering unlimited posts, a scriptable CLI interface, and full user control without unwanted features like AI assistants.
+
+=> showcase/gos/image-2.png gos screenshot
+
+The implementation uses OAuth2 for LinkedIn authentication, stores configuration as JSON, and manages posts through a platform-specific database structure. Gos employs intelligent scheduling based on configurable weekly targets, lookback windows, pause periods between posts, and run intervals to prevent over-posting. It supports priority queuing, platform exclusion rules, dry-run testing, and can generate Gemini gemtext summaries of posted content. Built with Mage for automation, the tool integrates seamlessly into shell workflows and can be triggered on intervals to maintain a consistent posting cadence across platforms.
+
+=> https://codeberg.org/snonux/gos View on Codeberg
+=> https://github.com/snonux/gos View on GitHub
+
+---
+
+### 18. dtail
* 💻 Languages: Go (93.9%), JSON (2.8%), C (2.0%), Make (0.5%), C/C++ (0.3%), Config (0.2%), Shell (0.2%), Docker (0.1%)
* 📚 Documentation: Text (79.4%), Markdown (20.6%)
@@ -396,7 +422,7 @@ The architecture is straightforward: it's a Go-based CLI application that persis
* 📈 Lines of Code: 20091
* 📄 Lines of Documentation: 5674
* 📅 Development Period: 2020-01-09 to 2025-06-20
-* 🏆 Score: 15.4 (combines code size and activity)
+* 🏆 Score: 15.3 (combines code size and activity)
* ⚖️ License: Apache-2.0
* 🏷️ Latest Release: v4.3.3 (2024-08-23)
@@ -414,32 +440,6 @@ The architecture follows a client-server model where DTail servers run on target
---
-### 18. gos
-
-* 💻 Languages: Go (99.8%), JSON (0.2%)
-* 📚 Documentation: Markdown (100.0%)
-* 📊 Commits: 399
-* 📈 Lines of Code: 4102
-* 📄 Lines of Documentation: 357
-* 📅 Development Period: 2024-05-04 to 2025-12-27
-* 🏆 Score: 14.6 (combines code size and activity)
-* ⚖️ License: Custom License
-* 🏷️ Latest Release: v1.2.3 (2026-01-31)
-
-
-=> showcase/gos/image-1.png gos screenshot
-
-Gos is a command-line social media scheduling tool written in Go that serves as a self-hosted replacement for Buffer.com. It enables users to schedule and post messages to Mastodon and LinkedIn (plus a "Noop" pseudo-platform for tracking) through a simple file-based queueing system. Messages are created as text files in a designated directory (`~/.gosdir`), with optional tags embedded in filenames or content to control platform targeting, priority, and scheduling behavior. The tool addresses limitations of commercial services by offering unlimited posts, a scriptable CLI interface, and full user control without unwanted features like AI assistants.
-
-=> showcase/gos/image-2.png gos screenshot
-
-The implementation uses OAuth2 for LinkedIn authentication, stores configuration as JSON, and manages posts through a platform-specific database structure. Gos employs intelligent scheduling based on configurable weekly targets, lookback windows, pause periods between posts, and run intervals to prevent over-posting. It supports priority queuing, platform exclusion rules, dry-run testing, and can generate Gemini gemtext summaries of posted content. Built with Mage for automation, the tool integrates seamlessly into shell workflows and can be triggered on intervals to maintain a consistent posting cadence across platforms.
-
-=> https://codeberg.org/snonux/gos View on Codeberg
-=> https://github.com/snonux/gos View on GitHub
-
----
-
### 19. ds-sim
* 💻 Languages: Java (98.9%), Shell (0.6%), CSS (0.5%)
@@ -448,7 +448,7 @@ The implementation uses OAuth2 for LinkedIn authentication, stores configuration
* 📈 Lines of Code: 25762
* 📄 Lines of Documentation: 3101
* 📅 Development Period: 2008-05-15 to 2025-06-27
-* 🏆 Score: 14.1 (combines code size and activity)
+* 🏆 Score: 14.0 (combines code size and activity)
* ⚖️ License: Custom License
* 🧪 Status: Experimental (no releases yet)
@@ -586,7 +586,7 @@ The implementation leverages Go's cross-compilation capabilities and Fyne's UI a
* 📈 Lines of Code: 33
* 📄 Lines of Documentation: 3
* 📅 Development Period: 2025-04-03 to 2025-04-03
-* 🏆 Score: 4.7 (combines code size and activity)
+* 🏆 Score: 4.6 (combines code size and activity)
* ⚖️ License: No license found
* 🧪 Status: Experimental (no releases yet)
@@ -809,11 +809,11 @@ The key advantage over traditional benchmarking tools is that it reproduces actu
* 💻 Languages: Perl (65.8%), Docker (34.2%)
* 📚 Documentation: Markdown (100.0%)
-* 📊 Commits: 19
+* 📊 Commits: 22
* 📈 Lines of Code: 149
-* 📄 Lines of Documentation: 15
-* 📅 Development Period: 2011-07-09 to 2026-02-03
-* 🏆 Score: 1.3 (combines code size and activity)
+* 📄 Lines of Documentation: 21
+* 📅 Development Period: 2011-07-09 to 2026-02-17
+* 🏆 Score: 1.5 (combines code size and activity)
* ⚖️ License: Custom License
* 🧪 Status: Experimental (no releases yet)
@@ -918,29 +918,7 @@ The tool is implemented around a hierarchical configuration system (`/etc/pingdo
---
-### 40. fype
-
-* 💻 Languages: C (71.8%), C/C++ (20.0%), HTML (6.3%), Make (1.8%)
-* 📚 Documentation: Text (65.1%), LaTeX (21.0%), Markdown (14.0%)
-* 📊 Commits: 107
-* 📈 Lines of Code: 9363
-* 📄 Lines of Documentation: 2713
-* 📅 Development Period: 2008-05-15 to 2026-02-20
-* 🏆 Score: 0.9 (combines code size and activity)
-* ⚖️ License: Custom License
-* 🧪 Status: Experimental (no releases yet)
-
-
-Fype is a 32-bit scripting language designed as a fun, AWK-inspired alternative with a simpler syntax. It supports variables with automatic type conversion, functions, loops, control structures, and built-in operations for math, I/O, and system calls. A notable feature is its support for "synonyms" (references/aliases to variables and functions), along with both procedures (using the caller's namespace) and functions (with lexical scoping). The language uses a straightforward syntax with single-character comments (#) and statement-based execution terminated by semicolons.
-
-The implementation uses a simple top-down parser with maximum lookahead of 1, interpreting code simultaneously as it parses, which means syntax errors are only caught at runtime. Written in C and compiled with GCC, it's designed for BSD systems (tested on FreeBSD 7.0) and uses NetBSD Make for building. The project is still unreleased and incomplete, but aims to eventually match AWK's capabilities while potentially adding modern features like function pointers and closures, though explicitly avoiding complexity like OOP, Unicode, or threading.
-
-=> https://codeberg.org/snonux/fype View on Codeberg
-=> https://github.com/snonux/fype View on GitHub
-
----
-
-### 41. xerl
+### 40. xerl
* 💻 Languages: Perl (98.3%), Config (1.2%), Make (0.5%)
* 📊 Commits: 670
@@ -961,7 +939,7 @@ The implementation follows strict OO Perl conventions with explicit typing and p
---
-### 42. ychat
+### 41. ychat
* 💻 Languages: C++ (49.9%), C/C++ (22.2%), Shell (20.6%), Perl (2.5%), HTML (1.9%), Config (1.8%), Make (0.9%), CSS (0.2%)
* 📚 Documentation: Text (100.0%)
@@ -984,7 +962,7 @@ The architecture emphasizes speed and scalability through several key design cho
---
-### 43. fapi
+### 42. fapi
* 💻 Languages: Python (96.6%), Make (3.1%), Config (0.3%)
* 📚 Documentation: Text (98.3%), Markdown (1.7%)
@@ -1006,7 +984,7 @@ The tool is implemented in Python and depends on the bigsuds library (F5's iCont
---
-### 44. perl-c-fibonacci
+### 43. perl-c-fibonacci
* 💻 Languages: C (80.4%), Make (19.6%)
* 📚 Documentation: Text (100.0%)
@@ -1027,7 +1005,7 @@ perl-c-fibonacci: source code repository.
---
-### 45. netcalendar
+### 44. netcalendar
* 💻 Languages: Java (83.0%), HTML (12.9%), XML (3.0%), CSS (0.8%), Make (0.2%)
* 📚 Documentation: Text (89.7%), Markdown (10.3%)
@@ -1054,17 +1032,17 @@ The key feature is its intelligent color-coded event visualization system that h
---
-### 46. loadbars
+### 45. loadbars
* 💻 Languages: Perl (97.4%), Make (2.6%)
* 📚 Documentation: Text (100.0%)
-* 📊 Commits: 557
+* 📊 Commits: 575
* 📈 Lines of Code: 1828
* 📄 Lines of Documentation: 100
* 📅 Development Period: 2010-11-05 to 2015-05-23
* 🏆 Score: 0.7 (combines code size and activity)
* ⚖️ License: No license found
-* 🏷️ Latest Release: v0.9.0 (2026-02-14)
+* 🏷️ Latest Release: v0.11.1 (2026-02-17)
⚠️ **Notice**: This project appears to be finished, obsolete, or no longer maintained. Last meaningful activity was over 2 years ago. Use at your own risk.
@@ -1075,7 +1053,7 @@ loadbars: source code repository.
---
-### 47. gotop
+### 46. gotop
* 💻 Languages: Go (98.0%), Make (2.0%)
* 📚 Documentation: Markdown (50.0%), Text (50.0%)
@@ -1098,7 +1076,7 @@ The implementation uses a concurrent architecture with goroutines for data colle
---
-### 48. rubyfy
+### 47. rubyfy
* 💻 Languages: Ruby (98.5%), JSON (1.5%)
* 📚 Documentation: Markdown (100.0%)
@@ -1121,6 +1099,29 @@ The tool is implemented as a lightweight Ruby script that prioritizes simplicity
---
+### 48. fype
+
+* 💻 Languages: C (71.2%), C/C++ (20.7%), HTML (6.6%), Make (1.5%)
+* 📚 Documentation: Text (60.3%), LaTeX (39.7%)
+* 📊 Commits: 107
+* 📈 Lines of Code: 8954
+* 📄 Lines of Documentation: 1432
+* 📅 Development Period: 2008-05-15 to 2014-06-30
+* 🏆 Score: 0.7 (combines code size and activity)
+* ⚖️ License: Custom License
+* 🧪 Status: Experimental (no releases yet)
+
+⚠️ **Notice**: This project appears to be finished, obsolete, or no longer maintained. Last meaningful activity was over 2 years ago. Use at your own risk.
+
+Fype is a 32-bit scripting language designed as a fun, AWK-inspired alternative with a simpler syntax. It supports variables with automatic type conversion, functions, loops, control structures, and built-in operations for math, I/O, and system calls. A notable feature is its support for "synonyms" (references/aliases to variables and functions), along with both procedures (using the caller's namespace) and functions (with lexical scoping). The language uses a straightforward syntax with single-character comments (#) and statement-based execution terminated by semicolons.
+
+The implementation uses a simple top-down parser with maximum lookahead of 1, interpreting code simultaneously as it parses, which means syntax errors are only caught at runtime. Written in C and compiled with GCC, it's designed for BSD systems (tested on FreeBSD 7.0) and uses NetBSD Make for building. The project is still unreleased and incomplete, but aims to eventually match AWK's capabilities while potentially adding modern features like function pointers and closures, though explicitly avoiding complexity like OOP, Unicode, or threading.
+
+=> https://codeberg.org/snonux/fype View on Codeberg
+=> https://github.com/snonux/fype View on GitHub
+
+---
+
### 49. pwgrep
* 💻 Languages: Shell (85.0%), Make (15.0%)
diff --git a/about/showcase/debroid/image-1.png b/about/showcase/debroid/image-1.png
index d7cec344..3a28b1cb 100644
--- a/about/showcase/debroid/image-1.png
+++ b/about/showcase/debroid/image-1.png
diff --git a/gemfeed/DRAFT-taskwarrior-autonomous-agent-loop.gmi b/gemfeed/DRAFT-taskwarrior-autonomous-agent-loop.gmi
deleted file mode 100644
index 5ac418d8..00000000
--- a/gemfeed/DRAFT-taskwarrior-autonomous-agent-loop.gmi
+++ /dev/null
@@ -1,477 +0,0 @@
-# Taskwarrior as an autonomous AI agent loop: 48 tasks in one day
-
-> Published at 2026-02-21T23:11:13+02:00
-
-=> ./taskwarrior-autonomous-agent/ior-flamegraph.png Example ior flamegraph showing I/O syscall activity by process, file path, and tracepoint
-
-I let Ampcode autonomously complete 48 Taskwarrior tasks on my eBPF project in a single day. The agent picked up one task after another — implemented, self-reviewed, spawned sub-agent reviews, addressed comments, committed, and moved on — all without me intervening. Here is how the setup works, what the project is about, and the full skill that drives the loop.
-
-=> https://ampcode.com Ampcode — the AI coding agent used for this project
-
-## Table of Contents
-
-* ⇢ Taskwarrior as an autonomous AI agent loop: 48 tasks in one day
-* ⇢ ⇢ What is ior and what does it do
-* ⇢ ⇢ ⇢ What is a syscall
-* ⇢ ⇢ ⇢ What is eBPF
-* ⇢ ⇢ ⇢ What ior traces and why
-* ⇢ ⇢ The problem: writing a full test suite by hand
-* ⇢ ⇢ Before and after
-* ⇢ ⇢ How the project-taskwarrior skill works
-* ⇢ ⇢ ⇢ SKILL.md — the entry point
-* ⇢ ⇢ ⇢ 00-context.md — project scoping and global rules
-* ⇢ ⇢ ⇢ 1-create-task.md — creating tasks with full context
-* ⇢ ⇢ ⇢ 2-start-task.md — fresh context per task
-* ⇢ ⇢ ⇢ 3-complete-task.md — the quality gate
-* ⇢ ⇢ ⇢ 4-annotate-update-task.md — progress tracking
-* ⇢ ⇢ ⇢ 5-review-overview-tasks.md — picking the next task
-* ⇢ ⇢ The reflection and review loop
-* ⇢ ⇢ Code review: human spot-check at the end
-* ⇢ ⇢ Measurable results
-* ⇢ ⇢ A real bug found by the review loop
-* ⇢ ⇢ Gotchas and lessons learned
-* ⇢ ⇢ ⇢ Cost
-* ⇢ ⇢ ⇢ Syscall wrappers on amd64
-* ⇢ ⇢ ⇢ Task granularity matters
-* ⇢ ⇢ How to replicate this
-
-## What is ior and what does it do
-
-I/O Riot NG (ior) is a Linux-only tool that traces synchronous I/O system calls in real time and produces flamegraphs showing which processes spend time on which files with which syscalls. It is written in Go and C, using eBPF via libbpfgo. It is the spiritual successor of an older project of mine called I/O Riot, which was based on SystemTap and C.
-
-=> ./taskwarrior-autonomous-agent/ior-logo.png I/O Riot NG logo
-
-=> https://codeberg.org/snonux/ior I/O Riot NG on Codeberg
-=> https://codeberg.org/snonux/ioriot The original I/O Riot (SystemTap)
-
-At the top of the blog post you see an example flamegraph produced by ior. The x-axis shows sample count (how frequent each I/O operation is), and the stack from bottom to top shows process ID, file path, and syscall tracepoint. You can immediately see which processes hammer which files with which syscalls.
-
-### What is a syscall
-
-A syscall (system call) is the interface between a user-space program and the Linux kernel. When a program wants to do anything that touches hardware or shared resources — open a file, read from a socket, write to disk, create a directory, check file permissions — it cannot do it directly. User-space programs run in an unprivileged CPU mode and have no access to hardware. They must ask the kernel by making a syscall.
-
-For example, when a program calls `open("/etc/passwd", O_RDONLY)`, it triggers the `openat` syscall. The CPU switches from user mode to kernel mode, the kernel validates the request, locates the file on disk, allocates a file descriptor, and returns it to the program — or returns an error code like ENOENT if the file does not exist. Every file operation, every network packet, every process fork goes through syscalls. They are the fundamental boundary between "your code" and "the operating system."
-
-There are hundreds of syscalls in Linux. The I/O-related ones that ior traces include:
-
-* `openat`, `creat`, `open_by_handle_at` — opening files
-* `read`, `write`, `pread64`, `pwrite64`, `readv`, `writev` — reading and writing data
-* `close`, `close_range` — closing file descriptors
-* `dup`, `dup2`, `dup3` — duplicating file descriptors
-* `fcntl` — manipulating file descriptor properties
-* `rename`, `renameat`, `renameat2` — renaming files
-* `link`, `linkat`, `symlink`, `symlinkat`, `readlinkat` — creating and reading links
-* `unlink`, `unlinkat`, `rmdir` — removing files and directories
-* `mkdir`, `mkdirat`, `chdir`, `getdents64` — directory operations
-* `stat`, `fstat`, `lstat`, `newfstatat`, `statx`, `access`, `faccessat` — file metadata
-* `fsync`, `fdatasync`, `sync`, `sync_file_range` — flushing data to disk
-* `truncate`, `ftruncate` — resizing files
-* `io_uring_setup`, `io_uring_enter`, `io_uring_register` — async I/O
-
-### What is eBPF
-
-eBPF (extended Berkeley Packet Filter) is a technology in the Linux kernel that lets you run sandboxed programs inside the kernel without changing kernel source code or loading kernel modules. Originally designed for network packet filtering, it has grown into a general-purpose in-kernel virtual machine.
-
-With eBPF, you write small C programs that the kernel verifies for safety (no infinite loops, no out-of-bounds access, no crashing the kernel) and then runs at near-native speed in an in-kernel virtual machine. These programs can attach to tracepoints — predefined instrumentation points in the kernel that fire whenever a specific event occurs, such as a syscall being entered or exited.
-
-ior uses eBPF to attach to the entry and exit tracepoints of every I/O-related syscall. When any process on the system calls `openat`, for example, the kernel fires the `sys_enter_openat` tracepoint, ior's BPF program captures the filename, PID, thread ID, and timestamp, and sends that data to user-space via a ring buffer. When the syscall returns, the `sys_exit_openat` tracepoint fires, and ior captures the return value and duration. This happens with near-zero overhead because the BPF program runs inside the kernel — there is no context switch to user-space for each event.
-
-### What ior traces and why
-
-ior pairs up syscall enter and exit events, tracks which file descriptors map to which file paths, and aggregates everything into a data structure that can be serialized to a compressed `.ior.zst` file or rendered as a flamegraph. The flamegraph shows a hierarchy of PID, file path, and syscall tracepoint, with the width proportional to how often or how long each combination occurs.
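The enter/exit pairing can be illustrated with a toy version (struct and field names are hypothetical — ior's real types differ): in-flight enter events are keyed by thread ID, since a syscall's enter and exit always happen on the same thread, and are completed when the matching exit arrives.

```go
package main

import "fmt"

// Toy event types; ior's real structs differ.
type enterEvent struct {
	tid  uint32
	name string // e.g. "sys_enter_openat"
	path string
	ts   uint64
}

type ioEvent struct {
	tid      uint32
	name     string
	path     string
	duration uint64
	ret      int64
}

// pair completes an in-flight enter event with its exit counterpart.
// The thread ID is a sufficient key because enter and exit of one
// syscall cannot interleave on the same thread.
func pair(inflight map[uint32]enterEvent, tid uint32, exitTs uint64, ret int64) (ioEvent, bool) {
	enter, ok := inflight[tid]
	if !ok {
		// Exit without a matching enter, e.g. tracing started mid-syscall.
		return ioEvent{}, false
	}
	delete(inflight, tid)
	return ioEvent{tid: tid, name: enter.name, path: enter.path, duration: exitTs - enter.ts, ret: ret}, true
}

func main() {
	inflight := map[uint32]enterEvent{}
	inflight[42] = enterEvent{tid: 42, name: "sys_enter_openat", path: "/etc/passwd", ts: 100}
	ev, ok := pair(inflight, 42, 150, 3)
	fmt.Println(ok, ev.duration, ev.ret) // true 50 3
}
```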
-
-This is useful for diagnosing I/O bottlenecks: you can see at a glance that process 5171 is spending most of its time writing to `/sys/fs/cgroup/memory.stat`, or that your database is doing thousands of `fsync` calls per second on its WAL file. Traditional tools like `strace` can show you this too, but `strace` uses ptrace which has significant overhead and slows down the traced process. eBPF-based tracing is orders of magnitude faster.
-
-## The problem: writing a full test suite by hand
-
-The ior project needed a comprehensive test suite at two levels:
-
-* Unit tests in `internal/eventloop_test.go` — these simulate raw BPF tracepoint data (byte slices), feed them into the event loop, and verify that enter/exit events are correctly paired, file descriptors are tracked, comm names are propagated, and filters work. No BPF, no kernel, no root required.
-* Integration tests in `integrationtests/` — these launch a real `ioworkload` binary that performs actual syscalls, start ior with real BPF tracing against that process, wait for it to finish, and then parse the resulting `.ior.zst` file to verify that the expected tracepoints were captured. These require root and a running kernel with BPF support.
-
-Both levels needed happy-path tests (does it work correctly?) and negative tests (does it handle errors like ENOENT, EBADF, EEXIST, EINVAL correctly?). Across 13 syscall categories, that is a lot of test code — roughly 93 scenarios, each with its own workload implementation and test assertions. Instructing the LLM on each of those tasks one by one would have taken days, and writing all of it by hand would have taken months.
-
-## Before and after
-
-Before I set up the Taskwarrior skill, my workflow with Ampcode looked like this: I would manually review the agent's output, then instruct it what to do next. One task at a time, constant babysitting. The agent had no memory of what was done or what was next. Context would degrade as the conversation grew longer.
-
-After: I front-loaded about 48 tasks into Taskwarrior with detailed descriptions and file references (Ampcode itself helped create the tasks), then gave Ampcode a single instruction: "complete this task, then automatically proceed to the next ready +integrationtests task by handing off with fresh context." It ran for about 6 hours autonomously. I reviewed the commits over coffee.
-
-The key difference is that Taskwarrior acts as persistent memory and a work queue. The agent does not need to remember what it did — the task list tells it what is done and what is next. Each task hands off to a fresh Ampcode thread, so there is no context window degradation. Ampcode's handoff mechanism — where one thread spawns a new one with a goal description — maps perfectly onto Taskwarrior's task-by-task workflow.
-
-## How the project-taskwarrior skill works
-
-```
- ┌──────────────────────────────────────────────────┐
- │ │
- │ task add pro:ior "implement open_test.go" +agent │
- │ task add pro:ior "implement close_test.go" +agent│
- │ task add pro:ior "add negative tests" +agent │
- │ ... × 48 │
- │ │
- │ ┌─────────┐ ┌──────────┐ ┌──────────┐ │
- │ │ Agent │ ─▶│ Self- │ ─▶│ Sub-agent│ │
- │ │ works │ │ review │ │ review │ │
- │ └─────────┘ └──────────┘ └──────────┘ │
- │ │ │ │ │
- │ │ fix │ fix │ │
- │ │◀─────────────┘◀─────────────┘ │
- │ │ │
- │ ▼ │
- │ git commit + push │
- │ task <id> done │
- │ ──▶ hand off to next task (fresh context) │
- │ │
- └──────────────────────────────────────────────────┘
-```
-
-The skill lives in `~/.agents/skills/project-taskwarrior/` and consists of a `SKILL.md` entry point plus six markdown files — one per action. The agent loads only the files it needs for the current action, so it does not waste context on instructions it does not need right now.
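-Concretely, the layout looks like this (file names as referenced throughout this post):
-
-```
-~/.agents/skills/project-taskwarrior/
-├── SKILL.md
-├── 00-context.md
-├── 1-create-task.md
-├── 2-start-task.md
-├── 3-complete-task.md
-├── 4-annotate-update-task.md
-└── 5-review-overview-tasks.md
-```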
-
-### SKILL.md — the entry point
-
-Every Ampcode skill has a `SKILL.md` with YAML frontmatter (name, description, trigger phrases) and an overview. This is what the agent sees first when it loads the skill:
-
-```
----
-name: project-taskwarrior
-description: "Manage Taskwarrior tasks scoped to the current git
- project. Use when asked to list, add, start, complete, annotate,
- or organize tasks for the project. Triggers on: tasks, todo,
- task list, pick next task, what's next."
----
-
-# Project Taskwarrior
-
-Taskwarrior tasks are scoped to the current git repository.
-Load only the files you need for the current action so the whole
-skill does not need to be in context.
-
-## When to load what
-
-| Action | Load |
-|---------------------------|---------------------------------------|
-| Create task | 00-context.md + 1-create-task.md |
-| Start task | 00-context.md + 2-start-task.md |
-| Complete task | 00-context.md + 3-complete-task.md |
-| Annotate / update task | 00-context.md + 4-annotate-update.md |
-| Review / overview tasks | 00-context.md + 5-review-overview.md |
-
-Always load 00-context.md first (project name resolution and
-global rules); then load the one action file that matches what
-you are doing.
-
-## Task lifecycle (overview)
-
-1. Create task
-2. Start task
-3. Annotate as you go
-4. Completion criteria (best practices, compilable, all tests
- pass, negative tests where plausible)
-5. Sub-agent review (fresh context)
-6. Main agent addresses all review comments
-7. Second sub-agent review (fresh context again) to confirm fixes
-8. Commit all changes to git
-9. Complete task
-
-A task is not done until criteria are met, all review comments
-are addressed, a second sub-agent review has confirmed the code,
-and all changes are committed to git. Details are in
-3-complete-task.md.
-```
-
-The key design decision is the table: the agent only loads the files relevant to what it is doing right now. Creating a task? Load `00-context.md` + `1-create-task.md`. Completing one? Load `00-context.md` + `3-complete-task.md`. This keeps context lean.
-
-### 00-context.md — project scoping and global rules
-
-This file is loaded with every action. It derives the project name from git and enforces that the agent only touches its own tasks (tagged `+agent`):
-
-```
-# Project Taskwarrior — shared context
-
-Load this with any of the action files (1–5) when working with tasks.
-It defines project scope and rules that apply to all task operations.
-
-## Project name
-
-Derive the project name from the git repository:
-
- basename -s .git \
- "$(git remote get-url origin 2>/dev/null)" 2>/dev/null \
- || basename "$(git rev-parse --show-toplevel)"
-
-Use it as project:<name> in every task command.
-
-## Rules that apply to all task commands
-
-- Project and tag matching: The agent only reads, modifies, or
- creates tasks that have both project:<name> and the +agent tag.
- Do not touch any task that does not have +agent set.
-- EVERY task command MUST include project:<name> — no exceptions.
- When listing or querying, also include +agent so only
- agent-managed tasks are shown. Never run a bare task without
- the project filter.
-- NEVER modify, delete, complete, start, or annotate tasks from
- other projects or tasks without +agent.
-- One task in progress per project. Do not start a second task
- while another is started and not completed, unless the user
- explicitly asks.
-- Parallel work via sub-agents — the agent may spawn sub-agents
- to work on tasks in parallel only after the user approves.
-```
-
-### 1-create-task.md — creating tasks with full context
-
-This is the most important file for setting up the autonomous loop. Every task must be self-contained — it must reference all files, docs, and specs needed so that an agent starting with zero prior context can work on it:
-
-```
-# Create task
-
-## Rules for new tasks
-
-- Create tasks in smaller chunks that fit into the context window.
- Break work into multiple tasks so that each task's scope,
- description, and required context can fit in one context window.
-- Every task MUST have at least one tag for sub-project/feature/area
- (e.g. +integrationtests, +flamegraph, +bpf, +cli).
-- When an agent creates a task, always add the tag +agent.
-- Include references to all context required to work on the task.
- Every task must list or link everything needed: relevant files,
- docs, specs, other tasks, or project guidelines. Put these in
- the task description or in an initial annotation.
-
-## Add a task
-
- task add project:<name> +<tag> +agent "Description"
-
-## With dependency
-
- task add project:<name> +<tag> +agent "Description" depends:<id>
-
-## Conventions
-
-- Keep tasks small: each task should fit in the context window.
-- Add dependencies when one task must complete before another.
-- Add references to all required context so the task is
- self-contained for fresh-context work.
-```
-
-### 2-start-task.md — fresh context per task
-
-This ensures each task gets a clean slate — no carry-over from previous work:
-
-```
-# Start task
-
-## Start each new task with a fresh context
-
-Work on each new task must begin with a fresh context — a new
-session or a sub-agent with no prior conversation. That way the
-task is executed with clear focus and no carry-over from other
-work. The task itself should already contain references to all
-required context; read the task description and all annotations
-to get files, docs, and specs before starting.
-
-## Mark task as started
-
-When you begin working on a task, always mark it as started:
-
- task <id> start
-
-Do this as soon as you start work on the task.
-
-## Conventions
-
-- Start each new task with a fresh context.
-- Run task <id> start when you start working.
-- Do not start a second task for the same project while one is
- already started and not done.
-```
-
-### 3-complete-task.md — the quality gate
-
-This is the heart of the skill. It enforces compilation, testing, negative tests, self-review, and a dual sub-agent review loop before any task can be marked done:
-
-```
-# Complete task
-
-## Completion criteria (required before "done")
-
-A task is not considered done until all of the following are true:
-
-- Best practices — the codebase follows the project's best
- practices.
-- Compilable — all code compiles successfully.
-- Tests pass — all tests pass.
-- Negative tests where plausible — for any new or changed tests,
- include negative tests wherever plausible.
-- All changes committed to git.
-
-## What the review sub-agent must check
-
-Review sub-agents (first and second review) must always:
-
-- Unit test coverage — double-check that coverage is as desired
- for the changed or added code.
-- Tests are testing real things — confirm that tests exercise
- real behavior and assertions, not only mocks. Flag tests that
- merely assert on mocks or stubs without verifying real logic.
-- Negative tests where plausible — for all tests created, ensure
- there are also negative tests. If positive tests exist but no
- corresponding negative tests, flag it.
-
-## Self-review before any sub-agent handoff
-
-Before signing off work to sub-agents for review, the main agent
-must ask itself:
-
-- Did everything I did make sense?
-- Isn't there a better way to do it?
-
-If the answer suggests improvements, address them first. Only
-then hand off to the sub-agent.
-
-## Before marking complete
-
-1. Self-review. Then spawn a sub-agent with fresh context.
-2. Sub-agent reviews the diff, code, or deliverables and reports
- back (review comments, suggestions, issues).
-3. Main agent addresses all review comments — no exceptions.
-4. Self-review again. Then spawn another sub-agent (fresh context)
- to review the code again and confirm the fixes. If this second
- review finds further issues, address them and repeat.
-5. Commit all changes to git.
-6. Only then: task <id> done
-
-## Conventions
-
-- A task is not done until: best practices met, code compiles,
- all tests pass, negative tests included, all review comments
- addressed, second sub-agent review confirmed, and all changes
- committed to git.
-```
-
-### 4-annotate-update-task.md — progress tracking
-
-```
-# Annotate / update task
-
-## Reading task context
-
-When working on a task, always read the full context: description,
-summary, and all annotations. Annotations often contain progress,
-challenges, and references to files or documents.
-
-## Annotate a task
-
- task <id> annotate "Note about progress or context"
-
-While making progress, add annotations to reflect progress,
-challenges, or decisions. Refer to files and documents so the
-task history stays useful for later work and for the
-pre-completion review.
-
-## Modify a task
-
- task <id> modify +<tag>
- task <id> modify depends:<id2>
- task <id> modify priority:H
-```
-
-### 5-review-overview-tasks.md — picking the next task
-
-```
-# Review / overview tasks
-
-## List tasks for the project
-
-Only list tasks that have +agent. Order by priority first, then
-urgency:
-
- task project:<name> +agent list sort:priority-,urgency-
-
-## Picking what to work on (next task)
-
-Order by priority first, then by urgency. Check already-started
-tasks first:
-
- task project:<name> +agent start.any: list
-
-If any tasks are already started, use one of those. Only if no
-tasks are in progress, show the next actionable (READY) task:
-
- task project:<name> +agent +READY list sort:priority-,urgency-
-
-## Blocked vs ready
-
- task project:<name> +agent +BLOCKED list
- task project:<name> +agent +READY list
-```
-
-## The reflection and review loop
-
-The real unlock was not just task automation — it was instructing Ampcode to reflect on its own work and then having it reviewed by a fresh pair of eyes.
-
-The skill instructs the agent to reflect on its own implementation ("Did everything I did make sense? Isn't there a better way?"), then has a sub-agent with fresh context review all the changes, lets the main agent address the review comments, and finally has a second sub-agent review the improvements again. That combination made it a smooth ride.
-
-The sub-agent reviews consistently caught things the main agent missed — tests that only asserted on mocks, missing edge cases, and even a real bug. Without the dual review loop, the agent tends to write tests that look correct but do not actually exercise real behavior.
-
-## Code review: human spot-check at the end
-
-On top of the agent's self-reflection and the two sub-agent reviews per task, I reviewed the produced outcome at the end. I did not read through all 5k lines one by one. Instead I looked for repeating patterns across the test files and cherry-picked a few scenarios — for example one integration test from the open/close family, one from the rename/link family, and one negative test — and went through those in detail manually. That was enough to satisfy me that the workflow had produced consistent, runnable tests and that the whole pipeline (task → implement → self-review → sub-agent review → fix → second review → commit) was working as intended.
-
-## Measurable results
-
-Here is what one day of autonomous Ampcode work produced:
-
-* About 6 hours of autonomous work (16:13 to 22:03)
-* 48 Taskwarrior tasks completed
-* 47 git commits
-* 87 files changed
-* ~5,000 lines added, ~500 removed
-* 18 integration test files
-* 15 workload scenario files (one per syscall category)
-* 93 test scenarios total (happy-path and negative)
-* 13 syscall categories fully covered: open, read/write, close, dup, fcntl, rename, link, unlink, dir, stat, sync, truncate, and io_uring
-
-## A real bug found by the review loop
-
-During the negative test implementation for `close_range`, the review loop uncovered a real bug in ior's event loop. The `close_range` handler was deleting file descriptors from the internal `files` map before resolving their paths. This meant the path information was lost by the time ior tried to record it in the flamegraph. The fix was to look up the path first, then delete the fd. This bug would have been very hard to notice by reading the code — it only became apparent when a negative test expected a path in the output and got nothing.
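The shape of the bug, reduced to a toy example (names hypothetical — ior's real event loop is more involved): once the entry is deleted from the fd-to-path map, the path cannot be resolved anymore, so the order of the two operations is everything.

```go
package main

import "fmt"

// resolveAndClose removes an fd from tracking in the correct order:
// resolve the path first, delete the map entry second.
func resolveAndClose(files map[int]string, fd int) string {
	path := files[fd] // look up the path BEFORE deleting
	delete(files, fd)
	return path
}

// buggyClose mirrors the bug the review loop found: the fd is deleted
// first, so the later lookup returns the zero value — the path is lost.
func buggyClose(files map[int]string, fd int) string {
	delete(files, fd)
	return files[fd] // always "" — the information is already gone
}

func main() {
	files := map[int]string{3: "/var/log/syslog"}
	fmt.Println(resolveAndClose(files, 3)) // /var/log/syslog

	files = map[int]string{3: "/var/log/syslog"}
	fmt.Println(buggyClose(files, 3) == "") // true: path was lost
}
```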
-
-## Gotchas and lessons learned
-
-### Cost
-
-I burned through about 100 USD in one day on Ampcode's token-based pricing. The dual sub-agent reviews are thorough but token-heavy — each task effectively runs three agents (main plus two reviewers), and with 48 tasks that adds up fast. Lesson learned: I am subscribing to Claude Max next. If you are going to let an agent run autonomously for hours, flat-rate pricing is the way to go.
-
-### Syscall wrappers on amd64
-
-Go's `syscall` package on amd64 silently delegates to `*at` variants. `os.Open()` calls `openat`, `os.Mkdir()` calls `mkdirat`, `os.Stat()` calls `newfstatat`. The agent kept writing tests expecting `enter_open` when the kernel actually sees `enter_openat`. I had to burn this into task descriptions as a permanent note: "CRITICAL: Always verify what the actual syscall is before writing test expectations." Once this was in the task context, the agent got it right every time.
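One way to keep this straight in test code is a small normalization table mapping the apparent call to the tracepoint the kernel actually fires. This is a sketch (the exact mapping depends on Go version, libc, and platform — verify with strace before relying on it):

```go
package main

import "fmt"

// actualTracepoint maps the syscall a Go program appears to make to the
// tracepoint the kernel actually fires on linux/amd64. Illustrative
// only — confirm with strace for your Go version and platform.
var actualTracepoint = map[string]string{
	"open":  "sys_enter_openat",     // os.Open delegates to openat
	"mkdir": "sys_enter_mkdirat",    // os.Mkdir delegates to mkdirat
	"stat":  "sys_enter_newfstatat", // os.Stat delegates to newfstatat
}

func main() {
	// Test expectations should use the right-hand side, not the left.
	fmt.Println(actualTracepoint["open"]) // sys_enter_openat
}
```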
-
-### Task granularity matters
-
-Tasks that were too broad ("add all integration tests") produced worse results than tasks scoped to a single syscall category ("implement open_test.go + workload scenarios for open, openat, creat, open_by_handle_at"). The smaller tasks fit in the context window, the agent could focus, and the review loop could meaningfully check the output. Bigger tasks led to context degradation and the agent cutting corners.
-
-## How to replicate this
-
-The recipe:
-
-* Use Taskwarrior (or any task tracker the agent can query via CLI).
-* Create an agent skill that teaches the agent the task lifecycle: create, start, implement, self-review, sub-agent review, fix, second review, commit, done, hand off.
-* Front-load tasks with detailed descriptions and file references. Each task must be self-contained.
-* Tag tasks so the agent only works on its own tasks and does not touch anything else.
-* Instruct the agent to hand off to a fresh context after completing each task. In Ampcode, this is the handoff mechanism that spawns a new thread with a goal.
-* Enforce a quality gate: compilation, tests, negative tests, and dual sub-agent review before marking done.
-* Use flat-rate pricing if you plan to run autonomously for hours.
-
-The skill files shown above are generic — they work for any git project and any coding agent that can run shell commands. The Taskwarrior CLI is the interface; the skill markdown is the instruction set. You can adapt them to your own project by changing the tags and the completion criteria.
-
-=> https://taskwarrior.org Taskwarrior — command-line task management
-
-Other related posts:
-
-=> ./2026-02-02-tmux-popup-editor-for-cursor-agent-prompts.gmi 2026-02-02 A tmux popup editor for Cursor Agent CLI prompts
-=> ./2023-07-17-career-guide-and-soft-skills-book-notes.gmi 2023-07-17 "Software Developers Career Guide and Soft Skills" book notes
-
-E-Mail your comments to `paul@nospam.buetow.org` :-)
-
-=> ../ Back to the main site
diff --git a/gemfeed/DRAFT-taskwarrior-autonomous-agent-loop.gmi.tpl b/gemfeed/DRAFT-taskwarrior-autonomous-agent-loop.gmi.tpl
deleted file mode 100644
index c62e92e1..00000000
--- a/gemfeed/DRAFT-taskwarrior-autonomous-agent-loop.gmi.tpl
+++ /dev/null
@@ -1,451 +0,0 @@
-# Taskwarrior as an autonomous AI agent loop: 48 tasks in one day
-
-> Published at 2026-02-21T23:11:13+02:00
-
-=> ./taskwarrior-autonomous-agent/ior-flamegraph.png Example ior flamegraph showing I/O syscall activity by process, file path, and tracepoint
-
-I let Ampcode autonomously complete 48 Taskwarrior tasks on my eBPF project in a single day. The agent picked up one task after another — implemented, self-reviewed, spawned sub-agent reviews, addressed comments, committed, and moved on — all without me intervening. Here is how the setup works, what the project is about, and the full skill that drives the loop.
-
-=> https://ampcode.com Ampcode — the AI coding agent used for this project
-
-<< template::inline::toc
-
-## What is ior and what does it do
-
-I/O Riot NG (ior) is a Linux-only tool that traces synchronous I/O system calls in real time and produces flamegraphs showing which processes spend time on which files with which syscalls. It is written in Go and C, using eBPF via libbpfgo. It is the spiritual successor of an older project of mine called I/O Riot, which was based on SystemTap and C.
-
-=> ./taskwarrior-autonomous-agent/ior-logo.png I/O Riot NG logo
-
-=> https://codeberg.org/snonux/ior I/O Riot NG on Codeberg
-=> https://codeberg.org/snonux/ioriot The original I/O Riot (SystemTap)
-
-At the top of the blog post you see an example flamegraph produced by ior. The x-axis shows sample count (how frequent each I/O operation is), and the stack from bottom to top shows process ID, file path, and syscall tracepoint. You can immediately see which processes hammer which files with which syscalls.
-
-### What is a syscall
-
-A syscall (system call) is the interface between a user-space program and the Linux kernel. When a program wants to do anything that touches hardware or shared resources — open a file, read from a socket, write to disk, create a directory, check file permissions — it cannot do it directly. User-space programs run in an unprivileged CPU mode and have no access to hardware. They must ask the kernel by making a syscall.
-
-For example, when a program calls `open("/etc/passwd", O_RDONLY)`, it triggers the `openat` syscall. The CPU switches from user mode to kernel mode, the kernel validates the request, locates the file on disk, allocates a file descriptor, and returns it to the program — or returns an error code like ENOENT if the file does not exist. Every file operation, every network packet, every process fork goes through syscalls. They are the fundamental boundary between "your code" and "the operating system."
-
-There are hundreds of syscalls in Linux. The I/O-related ones that ior traces include:
-
-* `openat`, `creat`, `open_by_handle_at` — opening files
-* `read`, `write`, `pread64`, `pwrite64`, `readv`, `writev` — reading and writing data
-* `close`, `close_range` — closing file descriptors
-* `dup`, `dup2`, `dup3` — duplicating file descriptors
-* `fcntl` — manipulating file descriptor properties
-* `rename`, `renameat`, `renameat2` — renaming files
-* `link`, `linkat`, `symlink`, `symlinkat`, `readlinkat` — creating and reading links
-* `unlink`, `unlinkat`, `rmdir` — removing files and directories
-* `mkdir`, `mkdirat`, `chdir`, `getdents64` — directory operations
-* `stat`, `fstat`, `lstat`, `newfstatat`, `statx`, `access`, `faccessat` — file metadata
-* `fsync`, `fdatasync`, `sync`, `sync_file_range` — flushing data to disk
-* `truncate`, `ftruncate` — resizing files
-* `io_uring_setup`, `io_uring_enter`, `io_uring_register` — async I/O
-
-### What is eBPF
-
-eBPF (extended Berkeley Packet Filter) is a technology in the Linux kernel that lets you run sandboxed programs inside the kernel without changing kernel source code or loading kernel modules. Originally designed for network packet filtering, it has grown into a general-purpose in-kernel virtual machine.
-
-With eBPF, you write small C programs that the kernel verifies for safety (no infinite loops, no out-of-bounds access, no crashing the kernel) and then runs at de-facto native speed in a VM inside of the Linux Kernel. These programs can attach to tracepoints — predefined instrumentation points in the kernel that fire whenever a specific event occurs, such as a syscall being entered or exited.
-
-ior uses eBPF to attach to the entry and exit tracepoints of every I/O-related syscall. When any process on the system calls `openat`, for example, the kernel fires the `sys_enter_openat` tracepoint, ior's BPF program captures the filename, PID, thread ID, and timestamp, and sends that data to user-space via a ring buffer. When the syscall returns, the `sys_exit_openat` tracepoint fires, and ior captures the return value and duration. This happens with near-zero overhead because the BPF program runs inside the kernel — there is no context switch to user-space for each event.
-
-### What ior traces and why
-
-ior pairs up syscall enter and exit events, tracks which file descriptors map to which file paths, and aggregates everything into a data structure that can be serialized to a compressed `.ior.zst` file or rendered as a flamegraph. The flamegraph shows a hierarchy of PID, file path, and syscall tracepoint, with the width proportional to how often or how long each combination occurs.
-
-This is useful for diagnosing I/O bottlenecks: you can see at a glance that process 5171 is spending most of its time writing to `/sys/fs/cgroup/memory.stat`, or that your database is doing thousands of `fsync` calls per second on its WAL file. Traditional tools like `strace` can show you this too, but `strace` uses ptrace which has significant overhead and slows down the traced process. eBPF-based tracing is orders of magnitude faster.
-
-## The problem: writing a full test suite by hand
-
-The ior project needed a comprehensive test suite at two levels:
-
-* Unit tests in `internal/eventloop_test.go` — these simulate raw BPF tracepoint data (byte slices), feed them into the event loop, and verify that enter/exit events are correctly paired, file descriptors are tracked, comm names are propagated, and filters work. No BPF, no kernel, no root required.
-* Integration tests in `integrationtests/` — these launch a real `ioworkload` binary that performs actual syscalls, start ior with real BPF tracing against that process, wait for it to finish, and then parse the resulting `.ior.zst` file to verify that the expected tracepoints were captured. These require root and a running kernel with BPF support.
-
-Both levels needed happy-path tests (does it work correctly?) and negative tests (does it handle errors like ENOENT, EBADF, EEXIST, EINVAL correctly?). Across 13 syscall categories, that is a lot of test code — roughly 93 scenarios, each with its own workload implementation and test assertions. Instructing the LLM through each of those tasks one by one would have taken days, and writing all of it by hand would have taken months.
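-
-To make the unit-test idea concrete: a test can fabricate the raw bytes a BPF ring buffer would deliver and feed them straight into the event loop. A minimal Go sketch of that round trip (the record layout here is invented for illustration and is not ior's actual wire format):
-
-```
-package main
-
-import (
-	"bytes"
-	"encoding/binary"
-	"fmt"
-)
-
-// rawEvent mimics the fixed-size record a BPF ring buffer delivers.
-// The layout is illustrative only, not ior's actual wire format.
-type rawEvent struct {
-	PID      uint32
-	TID      uint32
-	TimeNs   uint64
-	Filename [32]byte
-}
-
-// encode produces the byte slice a unit test would hand to the event loop.
-func encode(ev rawEvent) []byte {
-	var buf bytes.Buffer
-	binary.Write(&buf, binary.LittleEndian, ev) // fixed-size struct, cannot fail
-	return buf.Bytes()
-}
-
-// decode is what the event-loop side does with the raw bytes.
-func decode(b []byte) rawEvent {
-	var ev rawEvent
-	binary.Read(bytes.NewReader(b), binary.LittleEndian, &ev)
-	return ev
-}
-
-func main() {
-	in := rawEvent{PID: 5171, TID: 5171, TimeNs: 42}
-	copy(in.Filename[:], "/etc/hosts")
-
-	out := decode(encode(in))
-	name := string(bytes.TrimRight(out.Filename[:], "\x00"))
-	fmt.Println(out.PID, name) // 5171 /etc/hosts
-}
-```
-
-No BPF, no kernel, no root: the test only exercises the decoding and pairing logic in user space.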
-
-## Before and after
-
-Before I set up the Taskwarrior skill, my workflow with Ampcode looked like this: I would manually review the agent's output, then instruct it what to do next. One task at a time, constant babysitting. The agent had no memory of what was done or what was next. Context would degrade as the conversation grew longer.
-
-After: I front-loaded about 48 tasks into Taskwarrior with detailed descriptions and file references (Ampcode itself helped create the tasks), then gave Ampcode a single instruction: "complete this task, then automatically proceed to the next ready +integrationtests task by handing off with fresh context." It ran for about 6 hours autonomously. I reviewed the commits over coffee.
-
-The key difference is that Taskwarrior acts as persistent memory and a work queue. The agent does not need to remember what it did — the task list tells it what is done and what is next. Each task hands off to a fresh Ampcode thread, so there is no context window degradation. Ampcode's handoff mechanism — where one thread spawns a new one with a goal description — maps perfectly onto Taskwarrior's task-by-task workflow.
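-
-The work-queue idea is not Ampcode-specific: Taskwarrior can emit its queue as JSON via `task export`, and a thin driver can pick the next ready task from that. A Go sketch of the selection logic, run against a hard-coded sample instead of a live `task` binary (the field names follow Taskwarrior's export format; the driver itself is hypothetical):
-
-```
-package main
-
-import (
-	"encoding/json"
-	"fmt"
-	"sort"
-)
-
-// task mirrors a subset of Taskwarrior's `task export` JSON fields.
-type task struct {
-	ID          int     `json:"id"`
-	Description string  `json:"description"`
-	Status      string  `json:"status"`
-	Urgency     float64 `json:"urgency"`
-}
-
-// nextTask returns the pending task with the highest urgency, or nil.
-func nextTask(tasks []task) *task {
-	var ready []task
-	for _, t := range tasks {
-		if t.Status == "pending" {
-			ready = append(ready, t)
-		}
-	}
-	if len(ready) == 0 {
-		return nil
-	}
-	sort.Slice(ready, func(i, j int) bool { return ready[i].Urgency > ready[j].Urgency })
-	return &ready[0]
-}
-
-func main() {
-	// In real use this would be the output of:
-	//   task project:<name> +agent export
-	sample := []byte(`[
-	  {"id":1,"description":"implement open_test.go","status":"pending","urgency":8.2},
-	  {"id":2,"description":"add negative tests","status":"pending","urgency":6.1},
-	  {"id":3,"description":"done already","status":"completed","urgency":9.9}
-	]`)
-
-	var tasks []task
-	if err := json.Unmarshal(sample, &tasks); err != nil {
-		panic(err)
-	}
-	fmt.Println(nextTask(tasks).Description) // implement open_test.go
-}
-```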
-
-## How the project-taskwarrior skill works
-
-```
- ┌──────────────────────────────────────────────────┐
- │ │
- │ task add pro:ior "implement open_test.go" +agent │
- │ task add pro:ior "implement close_test.go" +agent│
- │ task add pro:ior "add negative tests" +agent │
- │ ... × 48 │
- │ │
- │ ┌─────────┐ ┌──────────┐ ┌──────────┐ │
- │ │ Agent │ ─▶│ Self- │ ─▶│ Sub-agent│ │
- │ │ works │ │ review │ │ review │ │
- │ └─────────┘ └──────────┘ └──────────┘ │
- │ │ │ │ │
- │ │ fix │ fix │ │
- │ │◀─────────────┘◀─────────────┘ │
- │ │ │
- │ ▼ │
- │ git commit + push │
- │ task <id> done │
- │ ──▶ hand off to next task (fresh context) │
- │ │
- └──────────────────────────────────────────────────┘
-```
-
-The skill lives in `~/.agents/skills/project-taskwarrior/` and consists of a `SKILL.md` entry point plus six markdown files: a shared context file and one file per action. The agent loads only the files it needs for the current action, so it does not waste context on instructions it does not need right now.
-
-### SKILL.md — the entry point
-
-Every Ampcode skill has a `SKILL.md` with YAML frontmatter (name, description, trigger phrases) and an overview. This is what the agent sees first when it loads the skill:
-
-```
----
-name: project-taskwarrior
-description: "Manage Taskwarrior tasks scoped to the current git
- project. Use when asked to list, add, start, complete, annotate,
- or organize tasks for the project. Triggers on: tasks, todo,
- task list, pick next task, what's next."
----
-
-# Project Taskwarrior
-
-Taskwarrior tasks are scoped to the current git repository.
-Load only the files you need for the current action so the whole
-skill does not need to be in context.
-
-## When to load what
-
-| Action | Load |
-|---------------------------|---------------------------------------|
-| Create task | 00-context.md + 1-create-task.md |
-| Start task | 00-context.md + 2-start-task.md |
-| Complete task | 00-context.md + 3-complete-task.md |
-| Annotate / update task | 00-context.md + 4-annotate-update.md |
-| Review / overview tasks | 00-context.md + 5-review-overview.md |
-
-Always load 00-context.md first (project name resolution and
-global rules); then load the one action file that matches what
-you are doing.
-
-## Task lifecycle (overview)
-
-1. Create task
-2. Start task
-3. Annotate as you go
-4. Completion criteria (best practices, compilable, all tests
- pass, negative tests where plausible)
-5. Sub-agent review (fresh context)
-6. Main agent addresses all review comments
-7. Second sub-agent review (fresh context again) to confirm fixes
-8. Commit all changes to git
-9. Complete task
-
-A task is not done until criteria are met, all review comments
-are addressed, a second sub-agent review has confirmed the code,
-and all changes are committed to git. Details are in
-3-complete-task.md.
-```
-
-The key design decision is the table: the agent only loads the files relevant to what it is doing right now. Creating a task? Load `00-context.md` + `1-create-task.md`. Completing one? Load `00-context.md` + `3-complete-task.md`. This keeps context lean.
-
-### 00-context.md — project scoping and global rules
-
-This file is loaded with every action. It derives the project name from git and enforces that the agent only touches its own tasks (tagged `+agent`):
-
-```
-# Project Taskwarrior — shared context
-
-Load this with any of the action files (1–5) when working with tasks.
-It defines project scope and rules that apply to all task operations.
-
-## Project name
-
-Derive the project name from the git repository:
-
- basename -s .git \
- "$(git remote get-url origin 2>/dev/null)" 2>/dev/null \
- || basename "$(git rev-parse --show-toplevel)"
-
-Use it as project:<name> in every task command.
-
-## Rules that apply to all task commands
-
-- Project and tag matching: The agent only reads, modifies, or
- creates tasks that have both project:<name> and the +agent tag.
- Do not touch any task that does not have +agent set.
-- EVERY task command MUST include project:<name> — no exceptions.
- When listing or querying, also include +agent so only
- agent-managed tasks are shown. Never run a bare task without
- the project filter.
-- NEVER modify, delete, complete, start, or annotate tasks from
- other projects or tasks without +agent.
-- One task in progress per project. Do not start a second task
- while another is started and not completed, unless the user
- explicitly asks.
-- Parallel work via sub-agents — the agent may spawn sub-agents
- to work on tasks in parallel only after the user approves.
-```
-
-### 1-create-task.md — creating tasks with full context
-
-This is the most important file for setting up the autonomous loop. Every task must be self-contained — it must reference all files, docs, and specs needed so that an agent starting with zero prior context can work on it:
-
-```
-# Create task
-
-## Rules for new tasks
-
-- Create tasks in smaller chunks that fit into the context window.
- Break work into multiple tasks so that each task's scope,
- description, and required context can fit in one context window.
-- Every task MUST have at least one tag for sub-project/feature/area
- (e.g. +integrationtests, +flamegraph, +bpf, +cli).
-- When an agent creates a task, always add the tag +agent.
-- Include references to all context required to work on the task.
- Every task must list or link everything needed: relevant files,
- docs, specs, other tasks, or project guidelines. Put these in
- the task description or in an initial annotation.
-
-## Add a task
-
- task add project:<name> +<tag> +agent "Description"
-
-## With dependency
-
- task add project:<name> +<tag> +agent "Description" depends:<id>
-
-## Conventions
-
-- Keep tasks small: each task should fit in the context window.
-- Add dependencies when one task must complete before another.
-- Add references to all required context so the task is
- self-contained for fresh-context work.
-```
-
-### 2-start-task.md — fresh context per task
-
-This ensures each task gets a clean slate — no carry-over from previous work:
-
-```
-# Start task
-
-## Start each new task with a fresh context
-
-Work on each new task must begin with a fresh context — a new
-session or a sub-agent with no prior conversation. That way the
-task is executed with clear focus and no carry-over from other
-work. The task itself should already contain references to all
-required context; read the task description and all annotations
-to get files, docs, and specs before starting.
-
-## Mark task as started
-
-When you begin working on a task, always mark it as started:
-
- task <id> start
-
-Do this as soon as you start work on the task.
-
-## Conventions
-
-- Start each new task with a fresh context.
-- Run task <id> start when you start working.
-- Do not start a second task for the same project while one is
- already started and not done.
-```
-
-### 3-complete-task.md — the quality gate
-
-This is the heart of the skill. It enforces compilation, testing, negative tests, self-review, and a dual sub-agent review loop before any task can be marked done:
-
-```
-# Complete task
-
-## Completion criteria (required before "done")
-
-A task is not considered done until all of the following are true:
-
-- Best practices — the codebase follows the project's best
- practices.
-- Compilable — all code compiles successfully.
-- Tests pass — all tests pass.
-- Negative tests where plausible — for any new or changed tests,
- include negative tests wherever plausible.
-- All changes committed to git.
-
-## What the review sub-agent must check
-
-Review sub-agents (first and second review) must always:
-
-- Unit test coverage — double-check that coverage is as desired
- for the changed or added code.
-- Tests are testing real things — confirm that tests exercise
- real behavior and assertions, not only mocks. Flag tests that
- merely assert on mocks or stubs without verifying real logic.
-- Negative tests where plausible — for all tests created, ensure
- there are also negative tests. If positive tests exist but no
- corresponding negative tests, flag it.
-
-## Self-review before any sub-agent handoff
-
-Before signing off work to sub-agents for review, the main agent
-must ask itself:
-
-- Did everything I did make sense?
-- Isn't there a better way to do it?
-
-If the answer suggests improvements, address them first. Only
-then hand off to the sub-agent.
-
-## Before marking complete
-
-1. Self-review. Then spawn a sub-agent with fresh context.
-2. Sub-agent reviews the diff, code, or deliverables and reports
- back (review comments, suggestions, issues).
-3. Main agent addresses all review comments — no exceptions.
-4. Self-review again. Then spawn another sub-agent (fresh context)
- to review the code again and confirm the fixes. If this second
- review finds further issues, address them and repeat.
-5. Commit all changes to git.
-6. Only then: task <id> done
-
-## Conventions
-
-- A task is not done until: best practices met, code compiles,
- all tests pass, negative tests included, all review comments
- addressed, second sub-agent review confirmed, and all changes
- committed to git.
-```
-
-### 4-annotate-update.md — progress tracking
-
-```
-# Annotate / update task
-
-## Reading task context
-
-When working on a task, always read the full context: description,
-summary, and all annotations. Annotations often contain progress,
-challenges, and references to files or documents.
-
-## Annotate a task
-
- task <id> annotate "Note about progress or context"
-
-While making progress, add annotations to reflect progress,
-challenges, or decisions. Refer to files and documents so the
-task history stays useful for later work and for the
-pre-completion review.
-
-## Modify a task
-
- task <id> modify +<tag>
- task <id> modify depends:<id2>
- task <id> modify priority:H
-```
-
-### 5-review-overview.md — picking the next task
-
-```
-# Review / overview tasks
-
-## List tasks for the project
-
-Only list tasks that have +agent. Order by priority first, then
-urgency:
-
- task project:<name> +agent list sort:priority-,urgency-
-
-## Picking what to work on (next task)
-
-Order by priority first, then by urgency. Check already-started
-tasks first:
-
- task project:<name> +agent start.any: list
-
-If any tasks are already started, use one of those. Only if no
-tasks are in progress, show the next actionable (READY) task:
-
- task project:<name> +agent +READY list sort:priority-,urgency-
-
-## Blocked vs ready
-
- task project:<name> +agent +BLOCKED list
- task project:<name> +agent +READY list
-```
-
-## The reflection and review loop
-
-The real unlock was not just task automation — it was instructing Ampcode to reflect on its own work and then having it reviewed by a fresh pair of eyes.
-
-The skill instructs the agent to reflect on its own implementation ("Did everything I did make sense? Isn't there a better way?"), then has a sub-agent with fresh context review all the changes, lets the main agent address the review comments, and finally has another sub-agent review the improvements. That combination made it a smooth ride.
-
-The sub-agent reviews consistently caught things the main agent missed — tests that only asserted on mocks, missing edge cases, and even a real bug. Without the dual review loop, the agent tends to write tests that look correct but do not actually exercise real behavior.
-
-## Code review: human spot-check at the end
-
-On top of the agent's self-reflection and the two sub-agent reviews per task, I reviewed the produced outcome at the end. I did not read through all 5k lines one by one. Instead I looked for repeating patterns across the test files and cherry-picked a few scenarios — for example one integration test from the open/close family, one from the rename/link family, and one negative test — and went through those in detail manually. That was enough to satisfy me that the workflow had produced consistent, runnable tests and that the whole pipeline (task → implement → self-review → sub-agent review → fix → second review → commit) was working as intended.
-
-## Measurable results
-
-Here is what one day of autonomous Ampcode work produced:
-
-* About 6 hours of autonomous work (16:13 to 22:03)
-* 48 Taskwarrior tasks completed
-* 47 git commits
-* 87 files changed
-* ~5,000 lines added, ~500 removed
-* 18 integration test files
-* 15 workload scenario files (one per syscall category)
-* 93 test scenarios total (happy-path and negative)
-* 13 syscall categories fully covered: open, read/write, close, dup, fcntl, rename, link, unlink, dir, stat, sync, truncate, and io_uring
-
-## A real bug found by the review loop
-
-During the negative test implementation for `close_range`, the review loop uncovered a real bug in ior's event loop. The `close_range` handler was deleting file descriptors from the internal `files` map before resolving their paths. This meant the path information was lost by the time ior tried to record it in the flamegraph. The fix was to look up the path first, then delete the fd. This bug would have been very hard to notice by reading the code — it only became apparent when a negative test expected a path in the output and got nothing.
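-
-In simplified form (a sketch of the ordering problem, not ior's actual code), the bug and the fix look like this:
-
-```
-package main
-
-import "fmt"
-
-// buggyClose deletes the fd before resolving its path, so the lookup
-// afterwards always returns the empty string.
-func buggyClose(files map[int]string, fd int) string {
-	delete(files, fd)
-	return files[fd] // entry is already gone: always ""
-}
-
-// fixedClose resolves the path first, then deletes the fd.
-func fixedClose(files map[int]string, fd int) string {
-	path := files[fd]
-	delete(files, fd)
-	return path
-}
-
-func main() {
-	fmt.Printf("buggy: %q\n", buggyClose(map[int]string{3: "/var/log/app.log"}, 3))
-	fmt.Printf("fixed: %q\n", fixedClose(map[int]string{3: "/var/log/app.log"}, 3))
-}
-```
-
-The negative test caught it precisely because the buggy version returns an empty path where the test expected one.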
-
-## Gotchas and lessons learned
-
-### Cost
-
-I burned through about 100 USD in one day on Ampcode's token-based pricing. The dual sub-agent reviews are thorough but token-heavy — each task effectively runs three agents (main plus two reviewers), and with 48 tasks that adds up fast. Lesson learned: I am subscribing to Claude Max next. If you are going to let an agent run autonomously for hours, flat-rate pricing is the way to go.
-
-### Syscall wrappers on amd64
-
-On linux/amd64, Go's standard library silently delegates to the `*at` syscall variants: `os.Open()` ends up calling `openat`, `os.Mkdir()` calls `mkdirat`, `os.Stat()` calls `newfstatat`. The agent kept writing tests expecting `enter_open` when the kernel actually sees `enter_openat`. I had to burn this into task descriptions as a permanent note: "CRITICAL: Always verify what the actual syscall is before writing test expectations." Once this was in the task context, the agent got it right every time.
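-
-One way to encode that note is a small lookup table that test helpers consult instead of guessing (a hypothetical helper; the mapping lists only the wrappers mentioned above):
-
-```
-package main
-
-import "fmt"
-
-// tracepointFor maps Go standard-library wrappers to the tracepoint the
-// kernel actually fires on linux/amd64. Verify any new wrapper with
-// strace before adding it here.
-var tracepointFor = map[string]string{
-	"os.Open":  "sys_enter_openat",
-	"os.Mkdir": "sys_enter_mkdirat",
-	"os.Stat":  "sys_enter_newfstatat",
-}
-
-// expectedTracepoint refuses to guess for unverified wrappers.
-func expectedTracepoint(wrapper string) (string, bool) {
-	tp, ok := tracepointFor[wrapper]
-	return tp, ok
-}
-
-func main() {
-	tp, _ := expectedTracepoint("os.Open")
-	fmt.Println(tp) // sys_enter_openat, never sys_enter_open
-}
-```
-
-A test that asks for an unlisted wrapper fails loudly instead of silently asserting on the wrong tracepoint.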
-
-### Task granularity matters
-
-Tasks that were too broad ("add all integration tests") produced worse results than tasks scoped to a single syscall category ("implement open_test.go + workload scenarios for open, openat, creat, open_by_handle_at"). The smaller tasks fit in the context window, the agent could focus, and the review loop could meaningfully check the output. Bigger tasks led to context degradation and the agent cutting corners.
-
-## How to replicate this
-
-The recipe:
-
-* Use Taskwarrior (or any task tracker the agent can query via CLI).
-* Create an agent skill that teaches the agent the task lifecycle: create, start, implement, self-review, sub-agent review, fix, second review, commit, done, hand off.
-* Front-load tasks with detailed descriptions and file references. Each task must be self-contained.
-* Tag tasks so the agent only works on its own tasks and does not touch anything else.
-* Instruct the agent to hand off to a fresh context after completing each task. In Ampcode, this is the handoff mechanism that spawns a new thread with a goal.
-* Enforce a quality gate: compilation, tests, negative tests, and dual sub-agent review before marking done.
-* Use flat-rate pricing if you plan to run autonomously for hours.
-
-The skill files shown above are generic — they work for any git project and any coding agent that can run shell commands. The Taskwarrior CLI is the interface; the skill markdown is the instruction set. You can adapt them to your own project by changing the tags and the completion criteria.
-
-=> https://taskwarrior.org Taskwarrior — command-line task management
-
-Other related posts:
-
-<< template::inline::rindex ampcode agent skill taskwarrior autonomous
-
-E-Mail your comments to `paul@nospam.buetow.org` :-)
-
-=> ../ Back to the main site