| author | Paul Buetow <paul@buetow.org> | 2025-10-02 11:31:38 +0300 |
|---|---|---|
| committer | Paul Buetow <paul@buetow.org> | 2025-10-02 11:31:38 +0300 |
| commit | ff9f3a641fec256e1f4b01fcd95590451f656f0a (patch) | |
| tree | 04f4b0d8d370006bd2cc22e35c4ce76f7ba134d6 | |
| parent | c0f9ecf5e0b075db8e54ef1235ec80878e418398 (diff) | |
Update content for html
187 files changed, 105 insertions, 10224 deletions
diff --git a/about/resources.html b/about/resources.html index b31929e3..57f73b1a 100644 --- a/about/resources.html +++ b/about/resources.html @@ -50,109 +50,109 @@ <span>In random order:</span><br /> <br /> <ul> -<li>21st Century C: C Tips from the New School; Ben Klemens; O'Reilly</li> +<li>Modern Perl; Chromatic; Onyx Neon Press</li> +<li>Terraform Cookbook; Mikael Krief; Packt Publishing</li> +<li>Java ist auch eine Insel; Christian Ullenboom; </li> <li>The Pragmatic Programmer; David Thomas; Addison-Wesley</li> +<li>Programming Perl aka "The Camel Book"; Tom Christiansen, brian d foy, Larry Wall & Jon Orwant; O'Reilly</li> +<li>Perl New Features; Joshua McAdams, brian d foy; Perl School</li> +<li>Raku Recipes; J.J. Merelo; Apress</li> +<li>The Kubernetes Book; Nigel Poulton; Unabridged Audiobook</li> +<li>The KCNA (Kubernetes and Cloud Native Associate) Book; Nigel Poulton</li> +<li>Object-Oriented Programming with ANSI-C; Axel-Tobias Schreiner</li> +<li>Learn You Some Erlang for Great Good!; Fred Hébert; No Starch Press</li> +<li>DevOps And Site Reliability Engineering Handbook; Stephen Fleming; Audible</li> +<li>Chaos Engineering - System Resiliency in Practice; Casey Rosenthal and Nora Jones; eBook</li> +<li>Clusterbau mit Linux-HA; Michael Schwartzkopff; O'Reilly</li> +<li>The Docker Book; James Turnbull; Kindle</li> <li>Ultimate Go Notebook; Bill Kennedy</li> +<li>Kubernetes Cookbook; Sameer Naik, Sébastien Goasguen, Jonathan Michaux; O'Reilly</li> <li>97 Things Every SRE Should Know; Emil Stolarsky, Jaime Woo; O'Reilly</li> +<li>Effective awk programming; Arnold Robbins; O'Reilly</li> +<li>100 Go Mistakes and How to Avoid Them; Teiva Harsanyi; Manning Publications</li> +<li>Go Brain Teasers - Exercise Your Mind; Miki Tebeka; The Pragmatic Programmers</li> +<li>Pro Puppet; James Turnbull, Jeffrey McCune; Apress</li> +<li>Systems Performance Tuning; Gian-Paolo D. 
Musumeci and others...; O'Reilly</li> +<li>The Practice of System and Network Administration; Thomas A. Limoncelli, Christina J. Hogan, Strata R. Chalup; Addison-Wesley Professional</li> +<li>Pro Git; Scott Chacon, Ben Straub; Apress</li> <li>C++ Programming Language; Bjarne Stroustrup;</li> -<li>Modern Perl; Chromatic ; Onyx Neon Press</li> -<li>Funktionale Programmierung; Peter Pepper; Springer</li> -<li>DevOps And Site Reliability Engineering Handbook; Stephen Fleming; Audible</li> +<li>Concurrency in Go; Katherine Cox-Buday; O'Reilly</li> +<li>Raku Fundamentals; Moritz Lenz; Apress</li> +<li>21st Century C: C Tips from the New School; Ben Klemens; O'Reilly</li> +<li>Polished Ruby Programming; Jeremy Evans; Packt Publishing</li> <li>Hands-on Infrastructure Monitoring with Prometheus; Joel Bastos, Pedro Araujo; Packt </li> -<li>The Docker Book; James Turnbull; Kindle</li> +<li>Effective Java; Joshua Bloch; Addison-Wesley Professional</li> +<li>Learning eBPF; Liz Rice; O'Reilly</li> <li>Distributed Systems: Principles and Paradigms; Andrew S. Tanenbaum; Pearson</li> -<li>The KCNA (Kubernetes and Cloud Native Associate) Book; Nigel Poulton</li> <li>The DevOps Handbook; Gene Kim, Jez Humble, Patrick Debois, John Willis; Audible</li> -<li>Amazon Web Services in Action; Michael Wittig and Andreas Wittig; Manning Publications</li> -<li>Effective awk programming; Arnold Robbins; O'Reilly</li> -<li>Java ist auch eine Insel; Christian Ullenboom; </li> -<li>The Practise of System and Network Administration; Thomas A. Limoncelli, Christina J. Hogan, Strata R. 
Chalup; Addison-Wesley Professional Pro Git; Scott Chacon, Ben Straub; Apress</li> -<li>Programming Perl aka "The Camel Book"; Tom Christiansen, brian d foy, Larry Wall & Jon Orwant; O'Reilly</li> -<li>Site Reliability Engineering; How Google runs production systems; O'Reilly</li> +<li>Funktionale Programmierung; Peter Pepper; Springer</li> +<li>Learn You a Haskell for Great Good!; Miran Lipovaca; No Starch Press</li> <li>Higher Order Perl; Mark Dominus; Morgan Kaufmann</li> -<li>Polished Ruby Programming; Jeremy Evans; Packt Publishing</li> -<li>Leanring eBPF; Liz Rice; O'Reilly</li> -<li>Clusterbau mit Linux-HA; Michael Schwartzkopff; O'Reilly</li> -<li>Think Raku (aka Think Perl 6); Laurent Rosenfeld, Allen B. Downey; O'Reilly</li> -<li>Data Science at the Command Line; Jeroen Janssens; O'Reilly</li> -<li>Go Brain Teasers - Exercise Your Mind; Miki Tebeka; The Pragmatic Programmers</li> <li>Programming Ruby 3.3 (5th Edition); Noel Rappin, with Dave Thomas; The Pragmatic Bookshelf</li> -<li>Kubernetes Cookbook; Sameer Naik, Sébastien Goasguen, Jonathan Michaux; O'Reilly</li> -<li>Chaos Engineering - System Resiliency in Practice; Casey Rosenthal and Nora Jones; eBook</li> -<li>100 Go Mistakes and How to Avoid Them; Teiva Harsanyi; Manning Publications</li> -<li>Systems Performance Tuning; Gian-Paolo D. Musumeci and others...; O'Reilly</li> -<li>Raku Fundamentals; Moritz Lenz; Apress</li> -<li>Terraform Cookbook; Mikael Krief; Packt Publishing</li> -<li>Tmux 2: Productive Mouse-free Development; Brain P. 
Hogan; The Pragmatic Programmers </li> -<li>Systemprogrammierung in Go; Frank Müller; dpunkt</li> -<li>The Kubernetes Book; Nigel Poulton; Unabridged Audiobook</li> -<li>DNS and BIND; Cricket Liu; O'Reilly</li> -<li>Pro Puppet; James Turnbull, Jeffrey McCune; Apress</li> <li>Developing Games in Java; David Brackeen and others...; New Riders</li> -<li>Learn You a Haskell for Great Good!; Miran Lipovaca; No Starch Press</li> +<li>Think Raku (aka Think Perl 6); Laurent Rosenfeld, Allen B. Downey; O'Reilly</li> +<li>Site Reliability Engineering; How Google runs production systems; O'Reilly</li> +<li>Systemprogrammierung in Go; Frank Müller; dpunkt</li> <li>The Go Programming Language; Alan A. A. Donovan; Addison-Wesley Professional</li> -<li>Raku Recipes; J.J. Merelo; Apress</li> -<li>Learn You Some Erlang for Great Good; Fred Herbert; No Starch Press</li> -<li>Perl New Features; Joshua McAdams, brian d foy; Perl School</li> -<li>Effective Java; Joshua Bloch; Addison-Wesley Professional</li> -<li>Object-Oriented Programming with ANSI-C; Axel-Tobias Schreiner</li> -<li>Concurrency in Go; Katherine Cox-Buday; O'Reilly</li> +<li>Amazon Web Services in Action; Michael Wittig and Andreas Wittig; Manning Publications</li> +<li>Tmux 2: Productive Mouse-free Development; Brian P. Hogan; The Pragmatic Programmers </li> +<li>Data Science at the Command Line; Jeroen Janssens; O'Reilly</li> +<li>DNS and BIND; Cricket Liu; O'Reilly</li> </ul><br /> <h2 style='display: inline' id='technical-references'>Technical references</h2><br /> <br /> <span>I didn't read these from beginning to end, but I use them to look things up. The books are in random order:</span><br /> <br /> <ul> -<li>BPF Performance Tools - Linux System and Application Observability, Brendan Gregg; Addison Wesley</li> -<li>Relayd and Httpd Mastery; Michael W Lucas</li> <li>Implementing Service Level Objectives; Alex Hidalgo; O'Reilly</li> -<li>Understanding the Linux Kernel; Daniel P. 
Bovet, Marco Cesati; O'Reilly</li> -<li>The Linux Programming Interface; Michael Kerrisk; No Starch Press </li> +<li>Groovy Kurz & Gut; Joerg Staudemeier; O'Reilly</li> <li>Algorithms; Robert Sedgewick, Kevin Wayne; Addison Wesley</li> +<li>The Linux Programming Interface; Michael Kerrisk; No Starch Press </li> <li>Go: Design Patterns for Real-World Projects; Mat Ryer; Packt</li> -<li>Groovy Kurz & Gut; Joerg Staudemeier; O'Reilly</li> +<li>Understanding the Linux Kernel; Daniel P. Bovet, Marco Cesati; O'Reilly</li> +<li>BPF Performance Tools - Linux System and Application Observability, Brendan Gregg; Addison Wesley</li> +<li>Relayd and Httpd Mastery; Michael W Lucas</li> </ul><br /> <h2 style='display: inline' id='self-development-and-soft-skills-books'>Self-development and soft-skills books</h2><br /> <br /> <span>In random order:</span><br /> <br /> <ul> -<li>Stop starting, start finishing; Arne Roock; Lean-Kanban University </li> -<li>97 Things Every Engineering Manager Should Know; Camille Fournier; Audiobook</li> -<li>Buddah and Einstein walk into a Bar; Guy Joseph Ale, Claire Bloom; Blackstone Publishing</li> -<li>Never Split the Difference; Chris Voss, Tahl Raz; Random House Business</li> -<li>Staff Engineer: Leadership beyond the management track; Will Larson; Audiobook</li> +<li>The Obstacle Is The Way; Ryan Holiday; Profile Books Ltd</li> +<li>The Daily Stoic; Ryan Holiday, Stephen Hanselman; Profile Books</li> +<li>So Good They Can't Ignore You; Cal Newport; Business Plus</li> +<li>The Good Enough Job; Simone Stolzoff; Ebury Edge</li> <li>101 Essays that change the way you think; Brianna Wiest; Audiobook</li> +<li>Search Inside Yourself - The Unexpected path to Achieving Success, Happiness (and World Peace); Chade-Meng Tan, Daniel Goleman, Jon Kabat-Zinn; HarperOne</li> +<li>Deep Work; Cal Newport; Piatkus</li> +<li>Staff Engineer: Leadership beyond the management track; Will Larson; Audiobook</li> +<li>The Bullet Journal Method; Ryder Carroll; Fourth 
Estate</li> <li>Eat That Frog; Brian Tracy</li> <li>Influence without Authority; A. Cohen, D. Bradford; Wiley</li> -<li>Coders at Work - Reflections on the craft of programming, Peter Seibel and Mitchell Dorian et al., Audiobook</li> -<li>Psycho-Cybernetics; Maxwell Maltz; Perigee Books</li> -<li>Atomic Habits; James Clear; Random House Business</li> -<li>The Off Switch; Mark Cropley; Virgin Books (RE-READ 1ST TIME)</li> -<li>The Bullet Journal Method; Ryder Carroll; Fourth Estate</li> -<li>Eat That Frog!; Brian Tracy; Hodder Paperbacks</li> -<li>Slow Productivity; Cal Newport; Penguin Random House</li> <li>Solve for Happy; Mo Gawdat (RE-READ 1ST TIME)</li> -<li>The Obstacle Is The Way; Ryan Holiday; Profile Books Ltd</li> -<li>Ultralearning; Scott Young; Thorsons</li> -<li>The 7 Habits Of Highly Effective People; Stephen R. Covey; Simon & Schuster UK</li> -<li>Digital Minimalism; Cal Newport; Portofolio Penguin</li> -<li>Who Moved My Cheese?; Dr. Spencer Johnson; Vermilion</li> -<li>Search Inside Yourself - The Unexpected path to Achieving Success, Happiness (and World Peace); Chade-Meng Tan, Daniel Goleman, Jon Kabat-Zinn; HarperOne</li> +<li>Never Split the Difference; Chris Voss, Tahl Raz; Random House Business</li> <li>Meditation for Mortals; Oliver Burkeman; Audiobook</li> -<li>Getting Things Done; David Allen</li> -<li>The Daily Stoic; Ryan Holiday, Stephen Hanselman; Profile Books</li> -<li>The Good Enough Job; Simone Stolzoff; Ebury Edge</li> <li>The Power of Now; Eckhart Tolle; Yellow Kite</li> +<li>97 Things Every Engineering Manager Should Know; Camille Fournier; Audiobook</li> +<li>The Complete Software Developer's Career Guide; John Sonmez; Unabridged Audiobook</li> <li>The Phoenix Project - A Novel About IT, DevOps, and Helping your Business Win; Gene Kim and Kevin Behr; Trade Select</li> -<li>Deep Work; Cal Newport; Piatkus</li> -<li>So Good They Can't Ignore You; Cal Newport; Business Plus</li> -<li>Soft Skills; John Sommez; Manning 
Publications</li> +<li>Getting Things Done; David Allen</li> +<li>Ultralearning; Anna Laurent; Self-published via Amazon</li> <li>Consciousness: A Very Short Introduction; Susan Blackmore; Oxford University Press</li> <li>The Joy of Missing Out; Christina Crook; New Society Publishers</li> +<li>Psycho-Cybernetics; Maxwell Maltz; Perigee Books</li> +<li>Soft Skills; John Sonmez; Manning Publications</li> <li>Time Management for System Administrators; Thomas A. Limoncelli; O'Reilly</li> -<li>The Complete Software Developer's Career Guide; John Sonmez; Unabridged Audiobook</li> -<li>Ultralearning; Anna Laurent; Self-published via Amazon</li> +<li>Coders at Work - Reflections on the craft of programming; Peter Seibel and Mitchell Dorian et al.; Audiobook</li> +<li>Eat That Frog!; Brian Tracy; Hodder Paperbacks</li> +<li>Who Moved My Cheese?; Dr. Spencer Johnson; Vermilion</li> +<li>The Off Switch; Mark Cropley; Virgin Books (RE-READ 1ST TIME)</li> +<li>Buddha and Einstein walk into a Bar; Guy Joseph Ale, Claire Bloom; Blackstone Publishing</li> +<li>Ultralearning; Scott Young; Thorsons</li> +<li>Slow Productivity; Cal Newport; Penguin Random House</li> +<li>Atomic Habits; James Clear; Random House Business</li> +<li>Digital Minimalism; Cal Newport; Portfolio Penguin</li> +<li>Stop starting, start finishing; Arne Roock; Lean-Kanban University </li> +<li>The 7 Habits Of Highly Effective People; Stephen R. Covey; Simon & Schuster UK</li> </ul><br /> <a class='textlink' href='../notes/index.html'>Here are my notes for some of the books</a><br /> <br /> @@ -161,31 +161,31 @@ <span>Some of these were in-person with exams; others were online learning lectures only. 
In random order:</span><br /> <br /> <ul> -<li>Cloud Operations on AWS - Learn how to configure, deploy, maintain, and troubleshoot your AWS environments; 3-day online live training with labs; Amazon</li> -<li>Algorithms Video Lectures; Robert Sedgewick; O'Reilly Online</li> -<li>MySQL Deep Dive Workshop; 2-day on-site training</li> -<li>Red Hat Certified System Administrator; Course + certification (Although I had the option, I decided not to take the next course as it is more effective to self learn what I need)</li> -<li>The Well-Grounded Rubyist Video Edition; David. A. Black; O'Reilly Online</li> -<li>F5 Loadbalancers Training; 2-day on-site training; F5, Inc. </li> -<li>Developing IaC with Terraform (with Live Lessons); O'Reilly Online</li> -<li>Scripting Vim; Damian Conway; O'Reilly Online</li> <li>AWS Immersion Day; Amazon; 1-day interactive online training </li> -<li>Structure and Interpretation of Computer Programs; Harold Abelson and more...; </li> -<li>Apache Tomcat Best Practises; 3-day on-site training</li> <li>The Ultimate Kubernetes Bootcamp; School of Devops; O'Reilly Online</li> -<li>Linux Security and Isolation APIs Training; Michael Kerrisk; 3-day on-site training</li> -<li>Protocol buffers; O'Reilly Online</li> +<li>Apache Tomcat Best Practices; 3-day on-site training</li> <li>Ultimate Go Programming; Bill Kennedy; O'Reilly Online</li> +<li>F5 Loadbalancers Training; 2-day on-site training; F5, Inc. </li> <li>Functional programming lecture; Remote University of Hagen</li> +<li>Developing IaC with Terraform (with Live Lessons); O'Reilly Online</li> +<li>Linux Security and Isolation APIs Training; Michael Kerrisk; 3-day on-site training</li> +<li>Red Hat Certified System Administrator; Course + certification (Although I had the option, I decided not to take the next course, as it is more effective to self-learn what I need)</li> +<li>MySQL Deep Dive Workshop; 2-day on-site training</li> +<li>The Well-Grounded Rubyist Video Edition; David A. 
Black; O'Reilly Online</li> +<li>Cloud Operations on AWS - Learn how to configure, deploy, maintain, and troubleshoot your AWS environments; 3-day online live training with labs; Amazon</li> +<li>Protocol buffers; O'Reilly Online</li> +<li>Structure and Interpretation of Computer Programs; Harold Abelson and more...; </li> +<li>Scripting Vim; Damian Conway; O'Reilly Online</li> +<li>Algorithms Video Lectures; Robert Sedgewick; O'Reilly Online</li> </ul><br /> <h2 style='display: inline' id='technical-guides'>Technical guides</h2><br /> <br /> <span>These are not whole books, but guides (smaller or larger) which I found very useful. In random order:</span><br /> <br /> <ul> +<li>Raku Guide at https://raku.guide </li> <li>Advanced Bash-Scripting Guide </li> <li>How CPUs work at https://cpu.land</li> -<li>Raku Guide at https://raku.guide </li> </ul><br /> <h2 style='display: inline' id='podcasts'>Podcasts</h2><br /> <br /> @@ -194,21 +194,21 @@ <span>In random order:</span><br /> <br /> <ul> -<li>Wednesday Wisdom</li> -<li>Fork Around And Find Out</li> -<li>The ProdCast (Google SRE Podcast)</li> -<li>The Changelog Podcast(s)</li> -<li>Modern Mentor</li> +<li>Dev Interrupted</li> <li>Backend Banter</li> -<li>Pratical AI</li> -<li>Maintainable</li> -<li>Hidden Brain</li> <li>BSD Now [BSD]</li> +<li>Maintainable</li> +<li>The ProdCast (Google SRE Podcast)</li> <li>Fallthrough [Golang]</li> -<li>Dev Interrupted</li> +<li>Wednesday Wisdom</li> +<li>The Changelog Podcast(s)</li> +<li>Fork Around And Find Out</li> +<li>Cup o' Go [Golang]</li> <li>Deep Questions with Cal Newport</li> +<li>Hidden Brain</li> +<li>Practical AI</li> <li>The Pragmatic Engineer Podcast</li> -<li>Cup o' Go [Golang]</li> +<li>Modern Mentor</li> </ul><br /> <h3 style='display: inline' id='podcasts-i-liked'>Podcasts I liked</h3><br /> <br /> @@ -218,8 +218,8 @@ <li>Go Time (predecessor of Fallthrough)</li> <li>CRE: Chaosradio Express [German]</li> <li>Ship It (predecessor of Fork Around And Find 
Out)</li> -<li>Java Pub House</li> <li>FLOSS Weekly</li> +<li>Java Pub House</li> <li>Modern Mentor</li> </ul><br /> <h2 style='display: inline' id='newsletters-i-like'>Newsletters I like</h2><br /> @@ -227,28 +227,28 @@ <span>This is a mix of tech and non-tech newsletters I am subscribed to. In random order:</span><br /> <br /> <ul> -<li>Monospace Mentor</li> -<li>Golang Weekly</li> -<li>The Imperfectionist</li> +<li>Changelog News</li> +<li>The Valuable Dev</li> <li>Andreas Brandhorst Newsletter (Sci-Fi author)</li> +<li>The Imperfectionist</li> +<li>Golang Weekly</li> +<li>Monospace Mentor</li> <li>Ruby Weekly</li> -<li>Changelog News</li> +<li>Applied Go Weekly Newsletter</li> +<li>The Pragmatic Engineer</li> <li>VK Newsletter</li> <li>byteSizeGo</li> -<li>The Pragmatic Engineer</li> -<li>The Valuable Dev</li> <li>Register Spill</li> -<li>Applied Go Weekly Newsletter</li> </ul><br /> <h2 style='display: inline' id='magazines-i-liked'>Magazines I like(d)</h2><br /> <br /> <span>This is a mix of tech magazines I like(d). I may not be a current subscriber, but now and then, I buy an issue. 
In random order:</span><br /> <br /> <ul> -<li>LWN (online only)</li> -<li>freeX (not published anymore)</li> <li>Linux User</li> +<li>LWN (online only)</li> <li>Linux Magazine</li> +<li>freeX (not published anymore)</li> </ul><br /> <h1 style='display: inline' id='formal-education'>Formal education</h1><br /> <br /> diff --git a/gemfeed/atom.xml b/gemfeed/atom.xml index 36f04d8e..d2f437a8 100644 --- a/gemfeed/atom.xml +++ b/gemfeed/atom.xml @@ -1,6 +1,6 @@ <?xml version="1.0" encoding="utf-8"?> <feed xmlns="http://www.w3.org/2005/Atom"> - <updated>2025-10-02T11:27:20+03:00</updated> + <updated>2025-10-02T11:30:14+03:00</updated> <title>foo.zone feed</title> <subtitle>To be in the .zone!</subtitle> <link href="https://foo.zone/gemfeed/atom.xml" rel="self" /> @@ -20,6 +20,8 @@ <div xmlns="http://www.w3.org/1999/xhtml"> <h1 style='display: inline' id='f3s-kubernetes-with-freebsd---part-7-k3s-and-first-pod-deployments'>f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</h1><br /> <br /> +<span class='quote'>Published at 2025-10-02T11:27:19+03:00</span><br /> +<br /> <span>This is the seventh blog post about the f3s series for my self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution I use on FreeBSD-based physical machines.</span><br /> <br /> <a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> diff --git a/gemfeed/examples/conf/README.md b/gemfeed/examples/conf/README.md deleted file mode 100644 index b0f5d08a..00000000 --- a/gemfeed/examples/conf/README.md +++ /dev/null @@ -1,9 +0,0 @@ -conf -==== - -My personal config repositories. Including - -* rexfiles -* k8s/helm manifests -* some docker files -* RCM files (soon?) 
diff --git a/gemfeed/examples/conf/Rexfile b/gemfeed/examples/conf/Rexfile deleted file mode 100644 index 74260007..00000000 --- a/gemfeed/examples/conf/Rexfile +++ /dev/null @@ -1,3 +0,0 @@ -require for <'*/Rexfile'>; - -# vim: syntax=perl diff --git a/gemfeed/examples/conf/babylon5/README.md b/gemfeed/examples/conf/babylon5/README.md deleted file mode 100644 index 58a0a47e..00000000 --- a/gemfeed/examples/conf/babylon5/README.md +++ /dev/null @@ -1,3 +0,0 @@ -# Babylon5 - -Some backup of some Docker start scripts of my `babylon5.buetow.org` server, which I deleted as I moved off all containers to AWS ECS Fargate/Terraform https://codeberg.org/snonux/terraform ! diff --git a/gemfeed/examples/conf/babylon5/backup-start b/gemfeed/examples/conf/babylon5/backup-start deleted file mode 100755 index c616ba09..00000000 --- a/gemfeed/examples/conf/babylon5/backup-start +++ /dev/null @@ -1,64 +0,0 @@ -#!/usr/bin/bash - -set -euf -o pipefail -declare -r DATE=$(date +%d) - -ensure_directory () { - local -r dir="$1"; shift - - if [ ! 
-d "$dir" ]; then - mkdir "$dir" - chmod 700 "$dir" - fi -} - -get_docker_id () { - local -r image="$1"; shift - docker ps | awk -v image="$image" '$2 == image { print $1 }' -} - -backup_wallabag () { - ensure_directory /opt/backup/wallabag - local -r container="$(get_docker_id 'wallabag/wallabag')" - docker stop "$container" - tar -hcvpf /opt/backup/wallabag/wallabag.tar.gz.tmp /opt/wallabag && - mv /opt/backup/wallabag/wallabag.tar.gz.tmp /opt/backup/wallabag/wallabag-$DATE.tar.gz && - touch /opt/backup/wallabag.lastrun - docker start "$container" -} - -backup_vaultwarden () { - ensure_directory /opt/backup/vaultwarden - local -r container="$(get_docker_id 'vaultwarden/server:latest')" - docker stop "$container" - tar -hcvpf /opt/backup/vaultwarden/vaultwarden.tar.gz.tmp /opt/vaultwarden && - mv /opt/backup/vaultwarden/vaultwarden.tar.gz.tmp /opt/backup/vaultwarden/vaultwarden-$DATE.tar.gz && - touch /opt/backup/vaultwarden.lastrun - docker start "$container" -} - -backup_anki () { - ensure_directory /opt/backup/anki-sync-server - local -r container="$(get_docker_id 'anki-sync-server:latest')" - docker stop "$container" - tar -hcvpf /opt/backup/anki-sync-server/anki-sync-server.tar.gz.tmp /opt/anki-sync-server && - mv /opt/backup/anki-sync-server/anki-sync-server.tar.gz.tmp \ - /opt/backup/anki-sync-server/anki-sync-server-$DATE.tar.gz && - touch /opt/backup/anki-sync-server.lastrun - docker start "$container" -} - -backup_audiobookshelf_meta () { - ensure_directory /opt/backup/audiobookshelf - rsync -avz -delete /opt/audiobookshelf/metadata/backups/ /opt/backup/audiobookshelf -} - -backup_wallabag -backup_vaultwarden -backup_anki -backup_audiobookshelf_meta - -chgrp -R backup /opt/backup/ -find -L /opt/backup -mindepth 2 -type f -exec chmod 640 "{}" \; -find -L /opt/backup -mindepth 2 -type d -exec chmod 750 "{}" \; -chmod 755 /opt/backup/nextcloud/borg diff --git a/gemfeed/examples/conf/babylon5/docker-start-anki-sync-server 
b/gemfeed/examples/conf/babylon5/docker-start-anki-sync-server deleted file mode 100755 index a6b3930a..00000000 --- a/gemfeed/examples/conf/babylon5/docker-start-anki-sync-server +++ /dev/null @@ -1,4 +0,0 @@ -#!/usr/bin/bash - -set -x -docker run -d --name anki-sync-server --user nobody --restart always -v /opt/anki-sync-server/data:/data -p 83:27701 anki-sync-server:latest diff --git a/gemfeed/examples/conf/babylon5/docker-start-audiobookshelf b/gemfeed/examples/conf/babylon5/docker-start-audiobookshelf deleted file mode 100755 index 404c787c..00000000 --- a/gemfeed/examples/conf/babylon5/docker-start-audiobookshelf +++ /dev/null @@ -1,12 +0,0 @@ -#!/usr/bin/bash - -set -x - -docker pull ghcr.io/advplyr/audiobookshelf -docker run -d \ - -p 13378:80 \ - -v /opt/audiobookshelf/config:/config \ - -v /opt/audiobookshelf/metadata:/metadata \ - -v /opt/audiobookshelf/audiobooks:/audiobooks \ - -v /opt/audiobookshelf/podcasts:/podcasts \ - --name audiobookshelf ghcr.io/advplyr/audiobookshelf diff --git a/gemfeed/examples/conf/babylon5/docker-start-nextcloud-aio b/gemfeed/examples/conf/babylon5/docker-start-nextcloud-aio deleted file mode 100755 index 0a66afb7..00000000 --- a/gemfeed/examples/conf/babylon5/docker-start-nextcloud-aio +++ /dev/null @@ -1,15 +0,0 @@ -#!/usr/bin/bash - -set -x - -sudo docker run \ - --sig-proxy=false \ - --name nextcloud-aio-mastercontainer \ - --restart always \ - --publish 8080:8080 \ - -e APACHE_PORT=82 \ - -e APACHE_IP_BINDING=0.0.0.0 \ - -e NEXTCLOUD_DATADIR=/opt/nextcloud/ncdata \ - --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \ - --volume /var/run/docker.sock:/var/run/docker.sock:ro \ - nextcloud/all-in-one:latest diff --git a/gemfeed/examples/conf/babylon5/docker-start-vaultwarden b/gemfeed/examples/conf/babylon5/docker-start-vaultwarden deleted file mode 100755 index 15e1f93a..00000000 --- a/gemfeed/examples/conf/babylon5/docker-start-vaultwarden +++ /dev/null @@ -1,10 +0,0 @@ -#!/usr/bin/bash - -set -x - -# docker 
pull vaultwarden/server:latest -docker run -d \ - --restart always \ - --name vaultwarden \ - --volume /opt/vaultwarden/data/:/data/ \ - --publish 90:80 vaultwarden/server:latest diff --git a/gemfeed/examples/conf/babylon5/docker-start-wallabag b/gemfeed/examples/conf/babylon5/docker-start-wallabag deleted file mode 100755 index e0656d55..00000000 --- a/gemfeed/examples/conf/babylon5/docker-start-wallabag +++ /dev/null @@ -1,4 +0,0 @@ -#!/usr/bin/bash - -set -x -docker run -d --restart always -v /opt/wallabag/data:/var/www/wallabag/data -v /opt/wallabag/images:/var/www/wallabag/web/assets/images -p 81:80 -e "SYMFONY__ENV__DOMAIN_NAME=https://bag.buetow.org" wallabag/wallabag diff --git a/gemfeed/examples/conf/dotfiles/README.md b/gemfeed/examples/conf/dotfiles/README.md deleted file mode 100644 index 6fdd2c25..00000000 --- a/gemfeed/examples/conf/dotfiles/README.md +++ /dev/null @@ -1,5 +0,0 @@ -# dotfiles - -These are all my dotfiles. I can install them locally on my laptop and/or workstation as well as remotely on any server. - -For local installation, also have a read through https://blog.ferki.it/2023/08/11/local-management-with-rex/ diff --git a/gemfeed/examples/conf/dotfiles/Rexfile b/gemfeed/examples/conf/dotfiles/Rexfile deleted file mode 100644 index e0e002e5..00000000 --- a/gemfeed/examples/conf/dotfiles/Rexfile +++ /dev/null @@ -1,225 +0,0 @@ -use Rex -feature => [ '1.14', 'exec_autodie' ]; -use Rex::Logger; - -our $HOME = $ENV{HOME}; - -# In a public Git rapository. -our $DOT = "$HOME/git/conf/dotfiles"; - -# In a private Git repository. -our $DOT_PRIVATE = "$HOME/git/conf_private/dotfiles"; - -sub ensure_dir { - my ( $src_glob, $dst_dir, $file_mode ) = @_; - Rex::Logger::info("Ensure dir glob $src_glob"); - - file $dst_dir, - ensure => 'directory', - mode => '0700'; - - file "$dst_dir/" . 
basename($_), - ensure => 'present', - source => $_, - mode => $file_mode // '0640' - for glob $src_glob; -} - -sub ensure_file { - my ( $src_file, $dst_file, $file_mode ) = @_; - - file $dst_file, - ensure => 'present', - source => $src_file, - mode => $file_mode // '0640'; -} - -sub ensure { - my ( $src, $dst, $mode ) = @_; - ( $dst =~ /\/$/ ? \&ensure_dir : \&ensure_file )->( $src, $dst, $mode ); -} - -desc 'Install packages on Termux'; -task 'pkg_termux', sub { - my @pkgs = qw/ - ack-grep - ctags - fzf - golang - htop - make - nodejs - ripgrep - rsync - ruby - starship - tig - /; - - for my $pkg (@pkgs) { - Rex::Logger::info("Installing package $pkg"); - pkg $pkg, ensure => 'installed'; - } -}; - -desc 'Install packages on FreeBSD'; -task 'pkg_freebsd', sub { - my @pkgs = qw/ - bat - ctags - fzf - gmake - go - gron - htop - lynx - node - p5-ack - ripgrep - starship - tig - tmux - /; - - for my $pkg (@pkgs) { - Rex::Logger::info("Installing package $pkg"); - pkg $pkg, ensure => 'installed'; - } -}; - -desc 'Install packages on Fedora Linux'; -task 'pkg_fedora', sub { - my @pkgs = qw/ - opendoas - fd-find - nodejs-bash-language-server - fortune-mod - syncthing - ncdu - ack - fish - bat - ctags - fzf - golang - golang-x-tools-gopls - gpaste - gron - htop - java-latest-openjdk-devel - lynx - make - nodejs - perl-File-Slurp - procs - rakudo - Rex - ripgrep - ruby - strace - task2 - tig - tmux - dialect - chromium - strawberry - gnumeric - sway-config-fedora - sway - waybar - zathura - /; - - for my $pkg (@pkgs) { - Rex::Logger::info("Installing package $pkg"); - pkg $pkg, ensure => 'installed'; - } -}; - -desc 'Install ~/.config/helix'; -task 'home_helix', sub { ensure "$DOT/helix/*" => "$HOME/.config/helix/" }; - -desc 'Install ~/.config/ghostty'; -task 'home_ghostty', sub { ensure "$DOT/ghostty/*" => "$HOME/.config/ghostty/" }; - -desc 'Install ~/scripts'; -task 'home_scripts', sub { ensure "$DOT/scripts/*" => "$HOME/scripts/", '0750' }; - -desc 'Install ~/.ssh 
files'; -task 'home_ssh', sub { ensure "$DOT/ssh/config" => "$HOME/.ssh/config", '0600' }; - -desc 'Install BASH configuration'; -task 'home_bash', sub { - ensure "$DOT/bash/bash_profile" => "$HOME/.bash_profile"; - ensure "$DOT/bash/bashrc" => "$HOME/.bashrc"; -}; - -desc 'Install fish configuration'; -task 'home_fish', sub { - - # ensure "$DOT/fish/conf.d/*" => "$HOME/.config/fish/conf.d/"; - my $dest_dir = "$HOME/.config/fish/conf.d"; - if ( !-l $dest_dir ) { - if ( -d $dest_dir ) { - rename $dest_dir, "$dest_dir.old" or die "Could not rename $dest_dir: $!"; - } - symlink "$DOT/fish/conf.d" => $dest_dir or die "Could not create symlink: $!"; - } -}; - -desc 'Install gitsyncer configuration'; -task 'home_gitsyncer', sub { - my $dest_dir = "$HOME/.config/gitsyncer"; - symlink "$DOT/gitsyncer/" => $dest_dir or die "Could not create symlink: $!"; -}; - -sub isFileSymlink() { - my $file = shift; - return -l $file && -e $file; -} - -desc 'Vale and proselint'; -task 'home_vale', sub { - ensure "$DOT/vale.ini" => "$HOME/.vale.ini"; - say 'Now you can run "vale sync"'; -}; - -desc 'Install tmux configuration'; -task 'home_tmux', sub { - ensure "$DOT/tmux/*" => "$HOME/.config/tmux/"; -}; - -desc 'Install Sway configuration'; -task 'home_sway', sub { - ensure "$DOT/sway/config.d/*" => "$HOME/.config/sway/config.d/"; - ensure "$DOT/waybar/*" => "$HOME/.config/waybar/"; -}; - -desc 'Install my signature'; -task 'home_signature', sub { - ensure "$DOT/signature" => "$HOME/.signature"; -}; - -desc 'Install my calendar files'; -task 'home_calendar', sub { - unless ( -d $DOT_PRIVATE ) { - Rex::Logger::info( "$DOT_PRIVATE not there, skipping task", 'warn' ); - } - else { - ensure "$DOT_PRIVATE/calendar/*" => "$HOME/.calendar/"; - } -}; - -desc 'Install my Pipewire tuned for High-Res config'; -task 'home_pipewire', sub { - file "$HOME/.config/pipewire" => ensure => 'directory', - mode => '0750'; - ensure - "$DOT/pipewire/pipewire.conf" => "$HOME/.config/pipewire/pipewire.conf", - 
'0600'; -}; - -desc 'Install all my ~ files'; -task 'home', sub { - require Rex::TaskList; - run_task $_ for Rex::TaskList->create()->get_all_tasks('^home_'); -}; diff --git a/gemfeed/examples/conf/dotfiles/bash/bash_profile b/gemfeed/examples/conf/dotfiles/bash/bash_profile deleted file mode 100644 index 004a7b32..00000000 --- a/gemfeed/examples/conf/dotfiles/bash/bash_profile +++ /dev/null @@ -1,3 +0,0 @@ -if [ -f $HOME/.bashrc ]; then - source $HOME/.bashrc -fi diff --git a/gemfeed/examples/conf/dotfiles/bash/bashrc b/gemfeed/examples/conf/dotfiles/bash/bashrc deleted file mode 100644 index ec2b10c3..00000000 --- a/gemfeed/examples/conf/dotfiles/bash/bashrc +++ /dev/null @@ -1,15 +0,0 @@ -# If shell is interactive -if [[ ! -z "$PS1" && ! -f $HOME/.nofish ]]; then - # Use fish if it's installed - if [ -e /opt/local/bin/fish ]; then - exec /opt/local/bin/fish - elif [ -e /bin/fish ]; then - exec /bin/fish - elif [ -e /usr/bin/fish ]; then - exec /usr/bin/fish - elif [ -e /data/data/com.termux/files/usr/bin/fish ]; then - exec /data/data/com.termux/files/usr/bin/fish - fi - - echo 'I might want to install fish on this host' -fi diff --git a/gemfeed/examples/conf/dotfiles/claude/CLAUDE.md b/gemfeed/examples/conf/dotfiles/claude/CLAUDE.md deleted file mode 100644 index ffda0b71..00000000 --- a/gemfeed/examples/conf/dotfiles/claude/CLAUDE.md +++ /dev/null @@ -1,2 +0,0 @@ -- Whenever updating code, also update the comments in the code to reflect the reality and the reasoning. -- When a function reaches 50 lines of code or more, try to refactor it into several functions of about 30 lines each. In case of a go project, when main.go becomes too large, move code into the ./internal package. 
diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/ai.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/ai.fish deleted file mode 100644 index 23ce2b20..00000000 --- a/gemfeed/examples/conf/dotfiles/fish/conf.d/ai.fish +++ /dev/null @@ -1,39 +0,0 @@ -abbr -a gpt chatgpt -abbr -a gpti "chatgpt --interactive" -abbr -a suggest hexai -abbr -a explain 'hexai explain' -abbr -a aic 'aichat -e' - -# helix-gpt env vars used -# set -gx COPILOT_MODEL gpt-4.1 # can be changed with aimodels function -set -gx COPILOT_MODEL gpt-4o # can be changed with aimodels function -set -gx HANDLER copilot - -# TODO: also reconfigure aichat tool using this function -function aimodels - # nvim for the ai tool wrapper so i can use Copilot Chat from the command line. - set -l NVIM_DIR "$HOME/.config/nvim/" - set -l COPILOT_CHAT_DIR "$NVIM_DIR/pack/copilotchat/start/CopilotChat.nvim/lua/CopilotChat" - - printf "gpt-4o -gpt-5 -gpt-o3 -gpt-4.1 -claude-3.7-sonnet -claude-3.7-sonnet-thought -claude-4.0-sonnet -gemini-2.5-pro" >~/.aimodels - - set -gx COPILOT_MODEL (cat ~/.aimodels | fzf) - set -gx OPENAI_MODEL $COPILOT_MODEL - - if test -d $COPILOT_CHAT_DIR - set -l model_config "$COPILOT_CHAT_DIR/config-$COPILOT_MODEL.lua" - if test -f "$model_config" - echo "Using CopilotChat config from $model_config" - cp -v $model_config "$COPILOT_CHAT_DIR/config.lua" - else - echo "No config found at $model_config" - end - end -end diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/alternatives.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/alternatives.fish deleted file mode 100644 index 491cf1fe..00000000 --- a/gemfeed/examples/conf/dotfiles/fish/conf.d/alternatives.fish +++ /dev/null @@ -1,17 +0,0 @@ -if type -q bat - alias Cat=/usr/bin/cat - alias cat=bat -end -if type -q see - alias ca=see -end -if type -q bit - alias Git=/usr/bin/git - alias git=bit -end -if type -q procs - alias p='procs' -end -if type -q carl - alias cal='carl' -end diff --git 
a/gemfeed/examples/conf/dotfiles/fish/conf.d/config.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/config.fish deleted file mode 100644 index 670ca861..00000000 --- a/gemfeed/examples/conf/dotfiles/fish/conf.d/config.fish +++ /dev/null @@ -1,31 +0,0 @@ -fish_vi_key_bindings - -# Add paths to PATH -set -U fish_user_paths ~/bin ~/scripts ~/go/bin ~/.cargo/bin $fish_user_paths - -if command -q -v doas >/dev/null - abbr -a s doas -else - abbr -a s sudo -end - -abbr -a g 'grep -E -i' -abbr -a no 'grep -E -i -v' -abbr -a not 'grep -E -i -v' -abbr -a gl 'git log --pretty=oneline --graph --decorate --all' -abbr -a gp 'begin; git commit -a; and git pull; and git push; end' - -for dir in ~/.config/fish/conf.d.work ~/.config/fish/conf.d.local - if test -d $dir - for file in $dir/*.fish - source $file - end - end -end - -if test -d /home/linuxbrew/.linuxbrew - if status is-interactive - # Commands to run in interactive sessions can go here - end - eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)" -end diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/dotfiles.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/dotfiles.fish deleted file mode 100644 index 6304d321..00000000 --- a/gemfeed/examples/conf/dotfiles/fish/conf.d/dotfiles.fish +++ /dev/null @@ -1,48 +0,0 @@ -set -gx DOTFILES_DIR ~/git/rexfiles/dotfiles - -function dotfiles::update - set -l prev_pwd (pwd) - cd $DOTFILES_DIR - rex home - cd "$prev_pwd" -end - -function dotfiles::update::git - set -l prev_pwd (pwd) - cd $DOTFILES_DIR - git pull - git commit -a - git push - rex home - cd "$prev_pwd" -end - -function dotfiles::fuzzy::edit - set -l prev_pwd (pwd) - cd $DOTFILES_DIR - set -l dotfile (find . 
-type f -not -path '*/.git/*' | fzf) - $EDITOR "$dotfile" - if echo "$dotfile" | grep -F -q .fish - echo "Sourcing $dotfile" - source "$dotfile" - end - cd "$prev_pwd" -end - -function dotfiles::rexify - cd $DOTFILES_DIR - rex home - cd - -end - -function dotfiles::random::edit - $EDITOR (find $DOTFILES_DIR -type f -not -path '*/.git/*' | shuf -n 1) -end - -abbr -a .u 'dotfiles::update' -abbr -a .ug 'dotfiles::update::git' -abbr -a .e 'dotfiles::fuzzy::edit' -abbr -a .rex 'dotfiles::rexify' -abbr -a .re 'dotfiles::random::edit' -abbr -a cdconf "cd $HOME/git/conf" -abbr -a cdotfiles "cd $HOME/git/conf/dotfiles" diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/editor.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/editor.fish deleted file mode 100644 index bda46448..00000000 --- a/gemfeed/examples/conf/dotfiles/fish/conf.d/editor.fish +++ /dev/null @@ -1,44 +0,0 @@ -set -gx EDITOR hx -set -gx VISUAL $EDITOR -set -gx GIT_EDITOR $EDITOR -set -gx HELIX_CONFIG_DIR $HOME/.config/helix - -function editor::helix::open_with_lock - set -l file $argv[1] - set -l lock "$file.lock" - if test -f "$lock" - echo "File lock $lock exists! Another instance is editing it?" - return 2 - end - touch $lock - hx $file $argv[2..-1] - rm $lock -end - -function editor::helix::open_with_lock::force - set -l file $argv[1] - set -l lock "$file.lock" - if test -f "$lock" - echo "File lock $lock exists! Force deleting it and terminating all $EDITOR instances?" 
- rm -f $lock - pkill -f $EDITOR - end - touch $lock - hx $file $argv[2..-1] - rm $lock -end - -function editor::helix::edit::remote - set -l local_path $argv[1] - set -l remote_uri $argv[2] - scp $local_path $remote_uri; or return 1 - echo "LOCAL_PATH=$local_path; REMOTE_URI=$remote_uri" >~/.hx.remote.source - hx $local_path -end - -abbr -a lhx 'editor::helix::open_with_lock' -abbr -a hxl 'editor::helix::open_with_lock' -abbr -a hxlf 'editor::helix::open_with_lock::force' -abbr -a lhxf 'editor::helix::open_with_lock::force' -abbr -a rhx 'editor::helix::edit::remote' -abbr -a x hx diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/fuzzy.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/fuzzy.fish deleted file mode 100644 index 7683a0e7..00000000 --- a/gemfeed/examples/conf/dotfiles/fish/conf.d/fuzzy.fish +++ /dev/null @@ -1,5 +0,0 @@ -function __tv_git - tv git-repos -end - -bind \cg __tv_git diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/games.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/games.fish deleted file mode 100644 index 291a798f..00000000 --- a/gemfeed/examples/conf/dotfiles/fish/conf.d/games.fish +++ /dev/null @@ -1,15 +0,0 @@ -function games::colorscript - if test -e ~/git/shell-color-scripts - cd ~/git/shell-color-scripts - set -x DEV 1 - ./colorscript.sh --random - cd - - else - echo 'No colorscripts installed. 
Go to:' - echo ' https://gitlab.com/dwt1/shell-color-scripts' - end -end - -if not test -f ~/.colorscript.disable - games::colorscript -end diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/gos.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/gos.fish deleted file mode 100644 index a23d7a7b..00000000 --- a/gemfeed/examples/conf/dotfiles/fish/conf.d/gos.fish +++ /dev/null @@ -1,6 +0,0 @@ -set -x GOS_BIN ~/go/bin/gos -set -x GOS_DIR ~/.gosdir - -if test -f $GOS_BIN - alias cdgos "cd $GOS_DIR" -end diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/k8s.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/k8s.fish deleted file mode 100644 index ee1584bf..00000000 --- a/gemfeed/examples/conf/dotfiles/fish/conf.d/k8s.fish +++ /dev/null @@ -1,76 +0,0 @@ -function kcompletions - if command -q -v kubectl >/dev/null - kubectl completion fish | source - end -end - -# Check if the directory $HOME/.krew exists and update PATH -if test -d $HOME/.krew - set -x PATH (set -q KREW_ROOT; and echo $KREW_ROOT; or echo $HOME/.krew)/bin $PATH -end - -function kpod - set pattern "." 
- if test -n "$argv[1]" - set pattern "$argv[1]" - end - set -gx POD (kubectl get pods | grep "$pattern" | sort -R | head -n 1 | cut -d' ' -f1) - echo "Pod is $POD" -end - -function klogsf - if test -z "$POD" -o -n "$argv[1]" - kpod $argv - end - kubectl logs -f $POD -end - -function klogs - if test -z "$POD" -o -n "$argv[1]" - kpod $argv - end - kubectl logs $POD -end - -function kbash - if test -z "$POD" -o -n "$argv[1]" - kpod $argv - end - kubectl exec -it $POD -- /bin/bash -end - -function kshell - if test -z "$POD" -o -n "$argv[1]" - kpod $argv - end - kubectl exec -it $POD -- /bin/sh -end - -function kdesc - if test -z "$POD" -o -n "$argv[1]" - kpod $argv - end - kubectl describe pod $POD -end - -function kedit - if test -z "$POD" -o -n "$argv[1]" - kpod $argv - end - kubectl edit pod $POD -end - -function k8s::kubectl::config::contexts - kubectl config get-contexts | sed '1d; /\*/d' | awk '{ print $1 }' | sort -end -alias kcontexts="k8s::kubectl::config::contexts" - -function k8s::kubectl::config::use_context - kubectl config use-context (kubectl config get-contexts | sed '1d; /\*/d' | awk '{ print $1 }' | sort | fzf) -end -alias kcontext="k8s::kubectl::config::use_context" - -function k8s::kubectl::config::set_namespace - kubectl config set-context --current --namespace=(kubectl get ns | sed 1d | awk '{ print $1 }' | sort | fzf) -end -alias knamespace="k8s::kubectl::config::set_namespace" diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/quickedit.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/quickedit.fish deleted file mode 100644 index c722acc6..00000000 --- a/gemfeed/examples/conf/dotfiles/fish/conf.d/quickedit.fish +++ /dev/null @@ -1,93 +0,0 @@ -set -gx QUICKEDIT_DIR ~/QuickEdit - -function quickedit::postaction - set -l file_path $argv[1] - set -l make_run 0 - - if test -f Makefile - make - set make_run 1 - end - - # Go to git toplevel dir (if exists) - cd (dirname $file_path) - set -l git_dir (git rev-parse --show-toplevel 2>/dev/null) - 
if test $status -eq 0 - cd $git_dir - end - if not test $make_run -eq 1 - if test -f Makefile - make - end - end - if test -d .git - git commit -a -m Update - git pull - git push - end -end - -function quickedit - set -l prev_dir (pwd) - set -l grep_pattern . - - if test (count $argv) -gt 0 - set grep_pattern $argv[1] - end - - cd $QUICKEDIT_DIR - set files (find -L . -type f -not -path '*/.*' | grep -E "$grep_pattern") - - switch (count $files) - case 0 - echo No result found - return - case 1 - set file_path $files[1] - case '*' - set file_path (printf '%s\n' $files | fzf) - end - - if editor::helix::open_with_lock $file_path - quickedit::postaction $file_path - end - - cd $prev_dir -end - -function quickedit::direct - set -l dir $argv[1] - set -l file $argv[2] - cd $dir - - if editor::helix::open_with_lock $file - quickedit::postaction $file - end - - cd - -end - -function quickedit::scratchpad - quickedit::direct ~/Notes Scratchpad.md -end - -function quickedit::quicknote - quickedit::direct ~/Notes QuickNote.md -end - -function quickedit::performance - quickedit::direct ~/Notes Performance.md -end - -abbr -a e quickedit -abbr -a scratch quickedit::scratchpad -abbr -a S quickedit::scratchpad -abbr -a quicknote quickedit::quicknote -abbr -a performance quickedit::performance -abbr -a goals quickedit::performance -abbr -a er "ranger $QUICKEDIT_DIR" -abbr -a cdquickedit "cd $QUICKEDIT_DIR" -abbr -a cdnotes 'cd ~/Notes' -abbr -a cdfish 'cd ~/.config/fish/conf.d' -abbr -a cddocs 'cd ~/Documents' -abbr -a cdocs 'cd ~/Documents' diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/supersync.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/supersync.fish deleted file mode 100644 index 356f773f..00000000 --- a/gemfeed/examples/conf/dotfiles/fish/conf.d/supersync.fish +++ /dev/null @@ -1,114 +0,0 @@ -set -x SUPERSYNC_STAMP_FILE ~/.supersync.last - -# Only sync the HabitsAndQuotes when it's asked for via function parameter -function supersync::worktime - set -l 
worktime_dir ~/git/worktime - - if not test -d $worktime_dir - echo "Warning: Directory $worktime_dir does not exist" - return 1 - end - cd $worktime_dir - - if test (count $argv) -gt 0 -a $argv[1] = sync_quotes - if test -d ~/Notes/HabitsAndQuotes - echo "" >work-wisdoms.md.tmp - for notes in ~/Notes/HabitsAndQuotes/{Productivity,Mentoring}.md - grep '^\* ' $notes >>work-wisdoms.md.tmp - end - sort -u work-wisdoms.md.tmp >work-wisdoms.md - rm work-wisdoms.md.tmp - git add work-wisdoms.md - grep '^\* ' ~/Notes/HabitsAndQuotes/Exercise.md >exercises.md - git add exercises.md - end - end - - find . -name '*.txt' -exec git add {} \; - find . -name '*.json' -exec git add {} \; - git commit -a -m sync - - git pull origin master - git push origin master - - cd - -end - -function supersync::uprecords - set -l uprecords_dir ~/git/uprecords - set -l uprecords_repo git@codeberg.org:snonux/uprecords.git - - if not test -d $uprecords_dir - git clone $uprecords_repo $uprecords_dir - cd $uprecords_dir - else - cd $uprecords_dir - git pull - end - - make update - git commit -a -m Update - git push - cd - -end - -function supersync::taskwarrior - if test -f ~/scripts/taskwarriorfeeder.rb - ruby ~/scripts/taskwarriorfeeder.rb - else - echo "No taskwarrior feeder script, skipping" - end - - taskwarrior::export - taskwarrior::export::gos - taskwarrior::import -end - -function supersync::gitsyncer - set enable_file ~/.gitsyncer_enable - set now (date +%s) - set weekly_interval (math 7 \* 24 \* 60 \* 60) - - if not test -f $enable_file - echo $now >$enable_file - else - set last_run (cat $enable_file) - if test (math $now - $last_run) -lt $weekly_interval - return - end - end - - if test -f ~/go/bin/gitsyncer - ~/go/bin/gitsyncer sync bidirectional && ~/go/bin/gitsyncer showcase - end - if test $status -eq 0 - date +%s >$enable_file - end -end - -function supersync - supersync::worktime sync_quotes - supersync::taskwarrior - supersync::worktime no_sync_quotes - supersync::uprecords - 
supersync::gitsyncer - - if test -f ~/.gos_enable - gos - end - - date +%s >$SUPERSYNC_STAMP_FILE.tmp - mv $SUPERSYNC_STAMP_FILE.tmp $SUPERSYNC_STAMP_FILE -end - -function supersync::is_it_time_to_sync - set -l max_age 86400 - set -l now (date +%s) - if test -f $SUPERSYNC_STAMP_FILE - set -l diff (math $now - (cat $SUPERSYNC_STAMP_FILE)) - if test $diff -lt $max_age - return 0 - end - end - read -P "It's time to run supersync! Run it? (y/n) " answer; and test "$answer" = y; and supersync -end diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/taskwarrior.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/taskwarrior.fish deleted file mode 100644 index d3192bcd..00000000 --- a/gemfeed/examples/conf/dotfiles/fish/conf.d/taskwarrior.fish +++ /dev/null @@ -1,121 +0,0 @@ -function taskwarrior::fuzzy::_select - sed -n '/^[0-9]/p' | sort -rn | fzf | cut -d' ' -f1 -end - -function taskwarrior::fuzzy::find - set -g TASK_ID (task ready | taskwarrior::fuzzy::_select) -end - -function taskwarrior::select - set -l task_id "$argv[1]" - if test -n "$task_id" - set -g TASK_ID "$task_id" - end - if test "$TASK_ID" = - -o -z "$TASK_ID" - taskwarrior::fuzzy::find - end -end - -function taskwarrior::due::count - set -l due_count (task status:pending due.before:now count) - - if test $due_count -gt 0 - echo "There are $due_count tasks due!" - end -end - -function taskwarrior::add::track - if test (count $argv) -gt 0 - task add priority:L +personal +track $argv - else - tasksamurai +track - end -end - -function taskwarrior::add::standup - if test (count $argv) -gt 0 - task add priority:L +work +standup +sre +nosched $argv - task add priority:L +work +standup +storage +nosched $argv - - if test -f ~/git/helpers/jira/jira.rb - echo "Do you want to raise a Jira ticket? 
(y/n)" - read -l user_input - if test "$user_input" = y - ruby ~/git/helpers/jira/jira.rb --raise "$argv" - end - end - - else - tasksamurai +standup - end -end - -function taskwarrior::add::standup::editor - set -l tmpfile (mktemp /tmp/standup.XXXXXX.txt) - $EDITOR $tmpfile - taskwarrior::add::standup (cat $tmpfile) -end - -function _taskwarrior::set_import_export_tags - if test (uname) = Darwin - set -gx TASK_IMPORT_TAG work - set -gx TASK_EXPORT_TAG personal - else - set -gx TASK_IMPORT_TAG personal - set -gx TASK_EXPORT_TAG work - end -end - -function taskwarrior::export::gos - task +share status:pending export >"$WORKTIME_DIR/tw-gos-export-$(date +%s).json" - yes | task +share status:pending delete -end - -function taskwarrior::export - _taskwarrior::set_import_export_tags - set -l count (task +$TASK_EXPORT_TAG status:pending count) - - if test $count -eq 0 - return - end - - echo "Exporting $count tasks to $TASK_EXPORT_TAG" - task +$TASK_EXPORT_TAG status:pending export >"$WORKTIME_DIR/tw-$TASK_EXPORT_TAG-export-$(date +%s).json" - yes | task +$TASK_EXPORT_TAG status:pending delete -end - -function taskwarrior::import - _taskwarrior::set_import_export_tags - - find $WORKTIME_DIR -name "tw-$TASK_IMPORT_TAG-export-*.json" | while read -l import - task import $import - rm $import - end - - # $(hostname) rather than (hostname): fish only substitutes the $(...) form inside double quotes - find $WORKTIME_DIR -name "tw-$(hostname)-export-*.json" | while read -l import - task import $import - rm $import - end -end - -abbr -a t task -abbr -a L 'task add +log' -abbr -a tlog 'task add +log' -abbr -a log 'task add +log' -abbr -a tdue 'tasksamurai status:pending due.before:now' -abbr -a thome 'tasksamurai +home' -abbr -a tasks 'tasksamurai -track' -abbr -a tread 'tasksamurai +read' -abbr -a track 'taskwarrior::add::track' -abbr -a tra 'taskwarrior::add::track' -abbr -a trat 'timr track' -abbr -a tfind 'taskwarrior::fuzzy::find' -abbr -a ts tasksamurai - -# Virtual standup abbrs -abbr -a V 'taskwarrior::add::standup' -abbr -a Vstorage 'tasksamurai +standup +storage' 
-abbr -a Vsre 'tasksamurai +standup +sre' -abbr -a Ved 'taskwarrior::add::standup::editor' - -taskwarrior::due::count diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/timr.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/timr.fish deleted file mode 100644 index 4f084454..00000000 --- a/gemfeed/examples/conf/dotfiles/fish/conf.d/timr.fish +++ /dev/null @@ -1,25 +0,0 @@ -function timr_prompt -d "Display timr timr_status in the prompt" - if command -v timr >/dev/null - set -l timr_status (timr prompt) - if test -n "$timr_status" - set -l icon (string sub -l 1 -- "$timr_status") - set -l time (string sub -s 2 -- "$timr_status") - if test "$icon" = "▶" - set_color green - else - set_color yellow - end - printf '%s' "$icon" - set_color normal - printf ' %s' "$time" - end - end -end - -complete -c timr -n __fish_use_subcommand -a start -d "Start the timer" -complete -c timr -n __fish_use_subcommand -a stop -d "Stop the timer" -complete -c timr -n __fish_use_subcommand -a pause -d "Pause the timer" -complete -c timr -n __fish_use_subcommand -a status -d "Show the timer status" -complete -c timr -n __fish_use_subcommand -a reset -d "Reset the timer" -complete -c timr -n __fish_use_subcommand -a live -d "Show the live timer" -complete -c timr -n __fish_use_subcommand -a prompt -d "Show the prompt status" diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/tmputils.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/tmputils.fish deleted file mode 100644 index 20a122ad..00000000 --- a/gemfeed/examples/conf/dotfiles/fish/conf.d/tmputils.fish +++ /dev/null @@ -1,54 +0,0 @@ -set -gx TMPUTILS_DIR ~/data/tmp -set -gx TMPUTILS_TMPFILE ~/.tmpfile - -function tmpls - if not test -d $TMPUTILS_DIR - return - end - ls $TMPUTILS_DIR -end - -function tmptee - set -l name $argv[1] - if test -z "$name" - set name (date +%s) - else - set -e argv[1] - end - set -l file "$TMPUTILS_DIR/$name" - if not test -d $TMPUTILS_DIR - mkdir -p $TMPUTILS_DIR - end - tee $argv $file - echo $file 
>$TMPUTILS_TMPFILE -end - -function tmpcat - set -l name $argv[1] - if test -z "$name" - cat (tmpfile) - return - end - cat "$TMPUTILS_DIR/$name" -end - -function tmpedit - set -l name $argv[1] - if test -z "$name" - $EDITOR (tmpfile) - return - end - $EDITOR "$TMPUTILS_DIR/$name" -end - -function tmpgrep - set -l name $argv[1] - set -e argv[1] - tmpcat $name | grep $argv -end - -function tmpfile - cat $TMPUTILS_TMPFILE -end - -abbr -a cdtmp "cd $TMPUTILS_DIR" diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/tmux.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/tmux.fish deleted file mode 100644 index e65960e0..00000000 --- a/gemfeed/examples/conf/dotfiles/fish/conf.d/tmux.fish +++ /dev/null @@ -1,94 +0,0 @@ -function _tmux::cleanup_default - tmux list-sessions | string match -r '^T.*: ' | string match -v -r attached | string split ':' | while read -l s - echo "Killing $s" - tmux kill-session -t "$s" - end -end - -function _tmux::connect_command - set -l server_or_pod $argv[1] - if test -z "$TMUX_KEXEC" - echo "ssh -A -t $server_or_pod" - else - echo "kubectl exec -it $server_or_pod -- /bin/bash" - end - end -end - -function tmux::new - set -l session $argv[1] - _tmux::cleanup_default - if test -z "$session" - tmux::new (string join "" T (date +%s)) - else - tmux new-session -d -s $session - tmux -2 attach-session -t $session || tmux -2 switch-client -t $session - end -end - -function tmux::attach - set -l session $argv[1] - if test -z "$session" - tmux attach-session || tmux::new - else - tmux attach-session -t $session || tmux::new $session - end -end - -function tmux::remote - set -l server $argv[1] - tmux new -s $server "ssh -A -t $server 'tmux attach-session || tmux'" || tmux attach-session -d -t $server -end - -function tmux::search - set -l session (tmux list-sessions | fzf | cut -d: -f1) - if test -z "$TMUX" - tmux attach-session -t $session - else - tmux switch -t $session - end -end - -function tmux::cluster_ssh - if test -f "$argv[1]" - 
tmux::tssh_from_file $argv[1] - return - end - tmux::tssh_from_argument $argv -end - -function tmux::tssh_from_argument - set -l session $argv[1] - set first_server_or_container $argv[2] - set remaining_servers $argv[3..-1] - if test -z "$first_server_or_container" - set first_server_or_container $session - end - - tmux new-session -d -s $session (_tmux::connect_command "$first_server_or_container") - if not tmux list-session | grep "^$session:" - echo "Could not create session $session" - return 2 - end - for server_or_container in $remaining_servers - tmux split-window -t $session "tmux select-layout tiled; $(_tmux::connect_command "$server_or_container")" - end - tmux setw -t $session synchronize-panes on - tmux -2 attach-session -t $session || tmux -2 switch-client -t $session -end - -function tmux::tssh_from_file - set -l serverlist $argv[1] - set -l session (basename $serverlist | cut -d. -f1) - tmux::tssh_from_argument $session (awk '{ print $1 }' $serverlist | sed 's/.lan./.lan/g') -end - -alias tn 'tmux::new' -alias ta 'tmux::attach' -alias tx 'tmux::remote' -alias ts 'tmux::search' -alias tssh 'tmux::cluster_ssh' -alias tm tmux -alias tl 'tmux list-sessions' -alias foo 'tmux::new foo' -alias bar 'tmux::new bar' -alias baz 'tmux::new baz' diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/update.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/update.fish deleted file mode 100644 index 935b6302..00000000 --- a/gemfeed/examples/conf/dotfiles/fish/conf.d/update.fish +++ /dev/null @@ -1,75 +0,0 @@ -function update::tools - set pids - - echo "Installing/updating gofumpt" - go install mvdan.cc/gofumpt@latest & - set -a pids $last_pid - - echo "Installing/updating mage" - go install github.com/magefile/mage@latest & - set -a pids $last_pid - - echo "Installing/updating golangci-lint" - go install github.com/golangci/golangci-lint/v2/cmd/golangci-lint@latest & - set -a pids $last_pid - - echo "Installing/updating goimports" - go install 
golang.org/x/tools/cmd/goimports@latest & - set -a pids $last_pid - - for prog in hexai hexai-lsp hexai-tmux-action - echo "Installing/updating $prog from codeberg.org/snonux/hexai/cmd/$prog@latest" - go install codeberg.org/snonux/hexai/cmd/$prog@latest & - set -a pids $last_pid - end - - for prog in tasksamurai timr - echo "Installing/updating $prog from codeberg.org/snonux/$prog/cmd/$prog@latest" - go install codeberg.org/snonux/$prog/cmd/$prog@latest & - set -a pids $last_pid - end - - if test (uname) = Darwin - echo 'Updating cursor-agent on macOS' - cursor-agent update - end - set -a pids $last_pid - - if test (uname) = Linux - echo "Installing/updating tgpt" - go install github.com/aandrew-me/tgpt/v2@latest & - set -a pids $last_pid - - for prog in gos gitsyncer - echo "Installing/updating $prog from codeberg.org/snonux/$prog/cmd/$prog@latest" - go install codeberg.org/snonux/$prog/cmd/$prog@latest - end - - echo "Installing/updating @anthropic-ai/claude-code globally via npm" - doas npm uninstall -g @anthropic-ai/claude-code - doas npm install -g @anthropic-ai/claude-code - - # doas npm uninstall -g @qwen-code/qwen-code@latest - # doas npm install -g @qwen-code/qwen-code@latest - - echo "Installing/updating @openai/codex globally via npm" - doas npm uninstall -g @openai/codex - doas npm install -g @openai/codex - - echo "Installing/updating @google/gemini-cli globally via npm" - doas npm uninstall -g @google/gemini-cli - doas npm install -g @google/gemini-cli - - # echo "Installing/updating @sourcegraph/amp globally via npm" - # doas npm uninstall -g @sourcegraph/amp - # doas npm install -g @sourcegraph/amp - - echo "Installing/updating opencode-ai globally via npm" - doas npm uninstall -g opencode-ai - doas npm install -g opencode-ai - end - - for pid in $pids - wait $pid - end -end diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/utils.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/utils.fish deleted file mode 100644 index 0f112177..00000000 --- 
a/gemfeed/examples/conf/dotfiles/fish/conf.d/utils.fish +++ /dev/null @@ -1,142 +0,0 @@ -function fullest_h - df -h | sort -n -k 5 -end - -function fullest_i - df -i | sort -n -k 5 -end - -function usortn - sort | uniq -c | sort -n -end - -function asum - awk '{ sum += $1 } END { print sum }' -end - -function stop - set -l service $argv[1] - sudo service $service stop $argv -end - -function start - set -l service $argv[1] - sudo service $service start $argv -end - -function restart - set -l service $argv[1] - sudo service $service restart $argv -end - -function statuss - set -l service $argv[1] - sudo service $service status $argv -end - -function loop - set -l sleep 10 - if set -q SLEEP - set sleep $SLEEP - end - echo "sleep is $sleep" 1>&2 - while true - $argv - sleep $sleep - end -end - -function f - find . -iname "*$argv*" -end - -function random - set -l upto $argv[1] - set -l random (math $RANDOM % $upto) - echo "Sleeping $random seconds" - sleep $random -end - -function dedup - set -l file $argv[1] - if test -z $file - awk '{ if (line[$0] != 42) { print $0 }; line[$0] = 42; }' - else - awk '{ if (line[$0] != 42) { print $0 }; line[$0] = 42; }' $file | sudo tee $file.dedup >/dev/null - if test ! -f $file.dedupbak - sudo mv $file $file.dedupbak - end - sudo mv $file.dedup $file - wc -l $file $file.dedupbak - sudo gzip --best $file.dedupbak & - end -end - -function dedup_no_bak - set -l file $argv[1] - if test -z $file - awk '{ if (line[$0] != 42) { print $0 }; line[$0] = 42; }' - else - awk '{ if (line[$0] != 42) { print $0 }; line[$0] = 42; }' $file | sudo tee $file.dedup >/dev/null - if test ! 
-f $file.dedupbak - sudo mv $file $file.dedupbak - end - sudo mv $file.dedup $file - wc -l $file $file.dedupbak - sudo rm -v $file.dedupbak & - end -end - -function drop_caches - echo 3 | sudo tee /proc/sys/vm/drop_caches -end - -function ssl_connect - set -l address $argv[1] - openssl s_client -connect $address -end - -function ssl_dates - ssl_connect $argv | openssl x509 -noout -dates -end - -function lastu - last | grep -E -v '(root|cron|nagios)' -end - -function lastl - lastu | less -end - -abbr wetter 'curl http://wttr.in' - -abbr tf terraform - -function touchtype - tt --noskip --noreport --showwpm --bold --theme (tt -list themes | sort -R | head -n1) $argv -end - -function touchtype::quote - while true - touchtype -quotes en - sleep 0.2 - end -end - -abbr typing 'touchtype::quote' - -function sway_config_view - less /etc/sway/config -end - -function ssh::force - set -l server $argv[1] - ssh-keygen -R $server - ssh -A $server -end - -if test -f ~/git/geheim/geheim.rb - function geheim - ruby ~/git/geheim/geheim.rb $argv - end -end diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/worktime.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/worktime.fish deleted file mode 100644 index f2f7f5d6..00000000 --- a/gemfeed/examples/conf/dotfiles/fish/conf.d/worktime.fish +++ /dev/null @@ -1,122 +0,0 @@ -set -gx WORKTIME_DIR ~/git/worktime - -if test (uname) = Darwin -a ! 
-f ~/.wtloggedin - echo "Warn: Not logged in, run wtlogin" -end - -function worktime - ruby $WORKTIME_DIR/worktime.rb $argv -end - -function worktime::sync - cd $WORKTIME_DIR - git commit -a -m sync - git pull - git push - cd - -end - -function worktime::wisdom_reminder - if test -f $WORKTIME_DIR/work-wisdoms.md - sed -n '/^\* / { s/\* //; p; }' $WORKTIME_DIR/work-wisdoms.md | sort -R | head -n 1 - end -end - -function worktime::report - if test -f ~/.wtloggedin - if test -f ~/.wtmaster - worktime --report | tee $WORKTIME_DIR/report.txt - else - worktime --report - end - worktime::wisdom_reminder - end -end - -function worktime::add - set -l seconds $argv[1] - set -l what $argv[2] - set -l descr $argv[3] - set -l epoch (date +%s) - - if test -z "$what" - set what work - end - - if test -z "$descr" - worktime --add $seconds --epoch $epoch --what $what - else - worktime --add $seconds --epoch $epoch --what $what --descr "$descr" - end - - worktime::report -end - -function worktime::log - set -l seconds $argv[1] - set -l what $argv[2] - set -l epoch (date +%s) - - if test -z "$what" - set what work - end - - worktime --log --epoch $epoch --what $what - worktime::report -end - -function worktime::login - set -l what $argv[1] - if test -z "$what" - set what work - end - touch ~/.wtloggedin - worktime --login --what $what - worktime::wisdom_reminder -end - -function worktime::logout - set -l what $argv[1] - - if test -z "$what" - set what work - end - - if test -f ~/.wtloggedin - rm ~/.wtloggedin - end - - worktime --logout --what $what - worktime::report -end - -function worktime::status - worktime::report - - if test -f ~/.wtloggedin - echo "You are logged in" - set -l num_worklog (ls $WORKTIME_DIR | grep wl- | wc -l) - if test $num_worklog -gt 0 - echo "$num_worklog entries in the worklog" - end - else - echo "You are not logged in" - end -end - -abbr -a cdworktime "cd $WORKTIME_DIR" -abbr -a wt worktime -abbr -a wtedit 'worktime --edit' -abbr -a wtreport 'worktime 
--report' -abbr -a wtadd 'worktime::add' -abbr -a wtlog 'worktime::log' -abbr -a wtlogin 'worktime::login' -abbr -a wtlogout 'worktime::logout' -abbr -a wtstatus 'worktime::status' -abbr -a wtsync 'worktime::sync' -abbr -a wtf 'worktime --report' -abbr -a random_exercise "sort -R $WORKTIME_DIR/exercises.md | head -n 1" -abbr -a random_exercises "sort -R $WORKTIME_DIR/exercises.md | head -n 10" -abbr -a wl 'task add +work' -abbr -a ql 'task add +personal' -abbr -a pl 'task add +personal' diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/zoxide.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/zoxide.fish deleted file mode 100644 index 8fbd5d61..00000000 --- a/gemfeed/examples/conf/dotfiles/fish/conf.d/zoxide.fish +++ /dev/null @@ -1,6 +0,0 @@ -if type -q zoxide - echo Sourcing zoxide for fish shell... - zoxide init fish | source -else - echo "zoxide not installed?" -end diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/zsh.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/zsh.fish deleted file mode 100644 index 06174d84..00000000 --- a/gemfeed/examples/conf/dotfiles/fish/conf.d/zsh.fish +++ /dev/null @@ -1,12 +0,0 @@ -# To run a ZSH function in fish, you can use the following function. -function Z - touch ~/.nofish - zsh -i -c "$argv" - rm ~/.nofish -end - -function B - touch ~/.nofish - bash -i -c "$argv" - rm ~/.nofish -end diff --git a/gemfeed/examples/conf/dotfiles/ghostty/config b/gemfeed/examples/conf/dotfiles/ghostty/config deleted file mode 100644 index e1095832..00000000 --- a/gemfeed/examples/conf/dotfiles/ghostty/config +++ /dev/null @@ -1,17 +0,0 @@ -window-decoration = true -copy-on-select = true -quick-terminal-position = bottom -quick-terminal-screen = mouse -shell-integration = zsh -bold-is-bright = true - -# Toggle window decorations only works on Linux! 
-keybind = ctrl+shift+d=toggle_window_decorations -keybind = ctrl+shift+f=toggle_fullscreen -keybind = ctrl+shift+g=reload_config -# Toggle quick terminal only supported for MacOS -keybind = global:ctrl+shift+t=toggle_quick_terminal -keybind = ctrl+shift+c=copy_to_clipboard -keybind = ctrl+shift+v=paste_from_clipboard -keybind = ctrl+shift+w=paste_from_selection - diff --git a/gemfeed/examples/conf/dotfiles/gitsyncer/config.json b/gemfeed/examples/conf/dotfiles/gitsyncer/config.json deleted file mode 100644 index 3ebb7780..00000000 --- a/gemfeed/examples/conf/dotfiles/gitsyncer/config.json +++ /dev/null @@ -1,33 +0,0 @@ -{ - "organizations": [ - { - "host": "git@codeberg.org", - "name": "snonux" - }, - { - "host": "git@github.com", - "name": "snonux" - }, - { - "host": "paul@t450:git", - "backupLocation": true - } - ], - "repositories": [], - "skip_releases": { - "fapi": [ - "0.0.1" - ] - }, - "exclude_from_showcase": [ - "bratwurstmitsenf", - "Adv360-Pro-ZMK", - "katana", - "playground", - "pages", - "nvim" - ], - "exclude_branches": [ - "^codex/" - ] -}
\ No newline at end of file diff --git a/gemfeed/examples/conf/dotfiles/helix/config.toml b/gemfeed/examples/conf/dotfiles/helix/config.toml deleted file mode 100644 index 0d96c3ff..00000000 --- a/gemfeed/examples/conf/dotfiles/helix/config.toml +++ /dev/null @@ -1,87 +0,0 @@ -theme = "adwaita-dark" - -[editor] -bufferline = "always" -rulers = [80, 100, 120, 140] -line-number = "relative" -mouse = true -cursorline = true -cursorcolumn = true -continue-comments = false -completion-timeout = 2000 - -[editor.soft-wrap] -enable = true - -[editor.inline-diagnostics] -# cursor-line = "hint" - -[editor.auto-save] -focus-lost = true -after-delay.timeout = 3000 -after-delay.enable = true - -[editor.statusline] -left = ["version-control", "mode", "spinner", "file-name", "position" ] -center = ["diagnostics"] -right = ["selections", "file-encoding", "file-line-ending", "file-type"] - -[editor.lsp] -display-messages = true -display-inlay-hints = false - -[editor.cursor-shape] -normal = "block" -insert = "underline" -select = "bar" - -[editor.whitespace.render] -space = "none" -tab = "none" -newline = "none" - -[keys.normal] -D = ["ensure_selections_forward", "extend_to_line_end"] -S = ["ensure_selections_forward", "extend_to_line_start"] -0 = ["select_mode", "extend_to_file_start"] -G = ["ensure_selections_forward", "extend_to_file_end"] -"^" = ["move_prev_word_start", "move_next_word_end", "search_selection", "global_search"] -"ret" = "goto_word" - -C-c = "yank_main_selection_to_clipboard" -C-v = { b = "paste_clipboard_before", a = "paste_clipboard_after", r = ":clipboard-paste-replace" } -A-c = "toggle_comments" # Was originally C-c, so mapped to ALT now - -# Helix related helpers -C-h = { c = ":config-open", r = ":config-reload", C = ":run-shell-command cp -v ~/.config/helix/*.toml ~/git/conf/dotfiles/helix/", l = ":open ~/.config/helix/languages.toml", h = ":open ~/git/worktime/HelixCheat.md", L = ":log-open", d = ":theme default" } - -C-r = [ ":config-reload", 
":reload-all" ] - -C-u = [ ":write", ":run-shell-command sh -c 'source ~/.hx.remote.source; scp $LOCAL_PATH $REMOTE_URI && echo Uploaded to $REMOTE_URI || echo Failed uploading to $REMOTE_URI'"] - -# Various helpers -C-s = { e = ":set-option soft-wrap.enable true", d = ":set-option soft-wrap.enable false", s = "save_selection" } - -# Buffer stuff -C-q = ":buffer-close" - -# AI commands are good here. -C-p = { c = ":pipe ai correct this sentence and only print out the corrected text", r = ":pipe ai restructure and reword the input and dont leave information out and only print out the new text", a = ":pipe ai rewrite this in a more casual style", n = ":pipe ai these are book notes of mine. correct the grammar and re-organize the notes. use bullet points for short information and whole paragraphs for longer one. the output must be in Gemini Gemtext format with the star * as the bullet point symbol and not the minus - . dont leave out any content.", p = ":pipe ai" } -# Will replace the above -C-a = ":pipe hexai-tmux-action" - -# Git commands -C-g = { d = ":run-shell-command git diff", p = ":run-shell-command git pull", u = ":run-shell-command git push", t = ":run-shell-command tmux new-window -n hx-git-tig tig", c = ":run-shell-command tmux split-window -v 'git commit -a'" } - -# Build commands -C-l = { m = ":run-shell-command make", d = ":run-shell-command go-task dev", r = ":run-shell-command tmux new-window -n hx-go-task-run 'go-task run'" } - -[keys.normal.space] -B = "file_picker_in_current_buffer_directory" -Q = [ ":cd ~/QuickEdit", "file_picker_in_current_directory" ] - -[keys.select] -"{" = "goto_prev_paragraph" -"}" = "goto_next_paragraph" -n = ["extend_search_next", "merge_selections"] -N = ["extend_search_prev", "merge_selections"] diff --git a/gemfeed/examples/conf/dotfiles/helix/languages.toml b/gemfeed/examples/conf/dotfiles/helix/languages.toml deleted file mode 100644 index 60e6a19c..00000000 --- a/gemfeed/examples/conf/dotfiles/helix/languages.toml +++ 
/dev/null @@ -1,203 +0,0 @@ -[[language]] -name = "hcl" -scope = "source.hcl" -injection-regex = "(hcl|tf|nomad)" -language-id = "terraform" -file-types = ["hcl", "tf", "nomad"] -comment-token = "#" -block-comment-tokens = { start = "/*", end = "*/" } -indent = { tab-width = 2, unit = " " } -language-servers = [ "terraform-ls", "hexai-lsp" ] -auto-format = true - -[[language]] -name = "go" -auto-format = true -diagnostic-severity = "hint" -formatter = { command = "hx.goformatter" } -language-servers = [ "gopls", "golangci-lint-lsp", "hexai-lsp" ] -[language-server.hexai-lsp] -command = "hexai-lsp" - -[language-server.gopls] -command = "gopls" - -[language-server.gopls.config.hints] -assignVariableTypes = true -compositeLiteralFields = true -constantValues = true -functionTypeParameters = true -parameterNames = true -rangeVariableTypes = true - -# go install github.com/nametake/golangci-lint-langserver@latest -[language-server.golangci-lint-lsp] -command = "golangci-lint-langserver" - -# golangci-lint-langserver depends on/calls golangci-lint -# go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest -[language-server.golangci-lint-lsp.config] -command = ["golangci-lint", "run", "--issues-exit-code=1"] -# command = ["golangci-lint", "run", "--out-format", "json", "--issues-exit-code=1"] - -[[language]] -name = "c" -scope = "source.c" -injection-regex = "c" -file-types = ["c", "h"] -comment-token = "//" -language-servers = [ "clangd", "hexai-lsp" ] -indent = { tab-width = 2, unit = " " } - -[[grammar]] -name = "c" -source = { git = "https://github.com/tree-sitter/tree-sitter-c", rev = "7175a6dd5fc1cee660dce6fe23f6043d75af424a" } - -[language-server.clangd] -command = "clangd" - -[[language]] -name = "perl" -auto-format = true -formatter = { command = "perltidy", args = ["-l=120"] } -scope = "source.perl" -file-types = ["pl", "pm", "t", "psgi", "raku", "rakumod", "rakutest", "rakudoc", "nqp", "p6", "pl6", "pm6", { glob = "Rexfile" }] -shebangs = ["perl"] 
-comment-token = "#" -language-servers = [ "perlnavigator", "hexai-lsp" ] -indent = { tab-width = 2, unit = " " } - -[[grammar]] -name = "perl" -source = { git = "https://github.com/tree-sitter-perl/tree-sitter-perl", rev = "e99bb5283805db4cb86c964722d709df21b0ac16" } - -[[language]] -name = "pod" -scope = "source.pod" -injection-regex = "pod" -file-types = ["pod"] - -[[grammar]] -name = "pod" -source = { git = "https://github.com/tree-sitter-perl/tree-sitter-pod", rev = "39da859947b94abdee43e431368e1ae975c0a424" } - -[[language]] -name = "ruby" -auto-format = true -scope = "source.ruby" -injection-regex = "ruby" -file-types = [ - "rb", - "rbs", - "rake", - "irb", - "gemspec", - { glob = "Gemfile" }, - { glob = "Rakefile" } -] -shebangs = ["ruby"] -comment-token = "#" -language-servers = [ "ruby-lsp", "solargraph", "rubocop", "hexai-lsp" ] -indent = { tab-width = 2, unit = " " } - -[[grammar]] -name = "ruby" -source = { git = "https://github.com/tree-sitter/tree-sitter-ruby", rev = "206c7077164372c596ffa8eaadb9435c28941364" } - -[[language]] -name = "bash" -scope = "source.bash" -injection-regex = "(shell|bash|zsh|sh)" -file-types = [ - "sh", - "bash", - "zsh", - "zshenv", - "zlogin", - "zlogout", - "zprofile", - "zshrc", - "eclass", - "ebuild", - "bazelrc", - "Renviron", - "zsh-theme", - "ksh", - "cshrc", - "tcshrc", - "bashrc_Apple_Terminal", - "zshrc_Apple_Terminal", - { glob = "*zshrc*" }, -] -shebangs = ["sh", "bash", "dash", "zsh"] -comment-token = "#" -language-servers = [ "bash-language-server", "hexai-lsp" ] -indent = { tab-width = 2, unit = " " } - -[[language]] -name = "fish" -# scope = "source.fish" -# injection-regex = "(fish)" -# file-types = [ -# "fish", -# ] -# shebangs = ["fish" ] -# comment-token = "#" -language-servers = [ "fish-lsp", "hexai-lsp" ] -# indent = { tab-width = 4, unit = " " } - -[[grammar]] -name = "bash" -source = { git = "https://github.com/tree-sitter/tree-sitter-bash", rev = "275effdfc0edce774acf7d481f9ea195c6c403cd" } - 
-[language-server] -bash-language-server = { command = "bash-language-server", args = ["start"] } -vale-ls = { command = "vale-ls" } -ruby-lsp = { command = "ruby-lsp"} -rubocop = { command = "rubocop", args = ["--lsp"] } - -[[language]] -name = "markdown" -scope = "source.md" -injection-regex = "md|markdown" -file-types = ["md", "markdown", "mkd", "mdwn", "mdown", "markdn", "mdtxt", "mdtext", "workbook", "gmi", "tpl", "txt" ] -roots = [".marksman.toml"] -language-servers = [ "marksman", "markdown-oxide", "vale-ls", "hexai-lsp"] -indent = { tab-width = 2, unit = " " } - -[[grammar]] -name = "markdown" -source = { git = "https://github.com/MDeiml/tree-sitter-markdown", rev = "aaf76797aa8ecd9a5e78e0ec3681941de6c945ee", subpath = "tree-sitter-markdown" } - -[[language]] -name = "markdown.inline" -scope = "source.markdown.inline" -injection-regex = "markdown\\.inline" -file-types = [] -grammar = "markdown_inline" - -[[grammar]] -name = "markdown_inline" -source = { git = "https://github.com/MDeiml/tree-sitter-markdown", rev = "aaf76797aa8ecd9a5e78e0ec3681941de6c945ee", subpath = "tree-sitter-markdown-inline" } - -[[language]] -name = "gemini" -scope = "source.gmi" -file-types = ["gmi", "tpl"] - -[[grammar]] -name = "gemini" -source = { git = "https://git.sr.ht/~nbsp/tree-sitter-gemini", rev = "3cc5e4bdf572d5df4277fc2e54d6299bd59a54b3" } - -[[language]] -name = "java" -scope = "source.java" -injection-regex = "java" -file-types = ["java", "jav", "pde"] -roots = ["pom.xml", "build.gradle", "build.gradle.kts"] -language-servers = [ "jdtls", "hexai-lsp" ] -indent = { tab-width = 2, unit = " " } - -[[grammar]] -name = "java" -source = { git = "https://github.com/tree-sitter/tree-sitter-java", rev = "09d650def6cdf7f479f4b78f595e9ef5b58ce31e" } diff --git a/gemfeed/examples/conf/dotfiles/nvim/init.lua b/gemfeed/examples/conf/dotfiles/nvim/init.lua deleted file mode 100644 index c3b8701d..00000000 --- a/gemfeed/examples/conf/dotfiles/nvim/init.lua +++ /dev/null @@ -1,70 +0,0 
@@ - -require("CopilotChat").setup { - -- See Configuration section for options -} - -local timer = vim.loop.new_timer() -- Initialize the timer - -vim.api.nvim_create_autocmd("BufEnter", { - pattern = "*", - callback = function() - if vim.bo.filetype == "copilot-chat" then - local copilot_chat_buf = vim.api.nvim_get_current_buf() - vim.cmd("wincmd _") -- Maximize height - vim.cmd("wincmd |") -- Maximize width - local file_path = vim.fn.expand("~/.copilot_chat_output.txt") - - -- Start the timer with a 1-second interval (1000 ms delay, 1000 ms repeat) - timer:start(1000, 1000, vim.schedule_wrap(function() - if copilot_chat_buf and vim.api.nvim_buf_is_valid(copilot_chat_buf) then - -- Get all lines in the buffer - local lines = vim.api.nvim_buf_get_lines(copilot_chat_buf, 0, -1, false) - - -- Check for the stopping condition - local user_line_count = 0 - for _, line in ipairs(lines) do - if line:find("^## User") then - user_line_count = user_line_count + 1 - if user_line_count >= 2 then - print("Stopping write process: Two '## User' lines detected.") - timer:stop() - -- Write the buffer content to the file - vim.api.nvim_buf_call(copilot_chat_buf, function() - vim.cmd("write! " .. file_path) - end) - vim.cmd("qa!") - return - end - end - end - - -- Write the buffer content to the file - vim.api.nvim_buf_call(copilot_chat_buf, function() - vim.cmd("write! " .. file_path) - end) - end - end)) - end - end, -}) - -vim.api.nvim_create_user_command('CopilotAsk', function(args) - local chat = require("CopilotChat") - local input - if args.args and args.args ~= "" then - input = args.args - else - local input_file = os.getenv("HOME") .. "/.copilot_chat_input.txt" - local file = io.open(input_file, "r") - if file then - input = file:read("*all") - file:close() - else - print("Error: Unable to open input file.") - return - end - end - chat.ask(input) -end, { force = true, range = true, nargs = "?" 
}) - - diff --git a/gemfeed/examples/conf/dotfiles/pipewire/pipewire.conf b/gemfeed/examples/conf/dotfiles/pipewire/pipewire.conf deleted file mode 100644 index a97c99e7..00000000 --- a/gemfeed/examples/conf/dotfiles/pipewire/pipewire.conf +++ /dev/null @@ -1,257 +0,0 @@ -# Daemon config file for PipeWire version "0.3.51" # -# -# Copy and edit this file in /etc/pipewire for system-wide changes -# or in ~/.config/pipewire for local changes. -# -# It is also possible to place a file with an updated section in -# /etc/pipewire/pipewire.conf.d/ for system-wide changes or in -# ~/.config/pipewire/pipewire.conf.d/ for local changes. -# - -context.properties = { - ## Configure properties in the system. - #library.name.system = support/libspa-support - #context.data-loop.library.name.system = support/libspa-support - #support.dbus = true - #link.max-buffers = 64 - link.max-buffers = 16 # version < 3 clients can't handle more - #mem.warn-mlock = false - #mem.allow-mlock = true - #mem.mlock-all = false - #clock.power-of-two-quantum = true - #log.level = 2 - #cpu.zero.denormals = false - - core.daemon = true # listening for socket connections - core.name = pipewire-0 # core name and socket name - - ## Properties for the DSP configuration. - default.clock.rate = 48000 - default.clock.allowed-rates = [ 44100 48000 88200 96000 176400 192000 352800 384000 ] - #default.clock.quantum = 1024 - default.clock.min-quantum = 16 - #default.clock.max-quantum = 2048 - #default.clock.quantum-limit = 8192 - #default.video.width = 640 - #default.video.height = 480 - #default.video.rate.num = 25 - #default.video.rate.denom = 1 - # - #settings.check-quantum = false - #settings.check-rate = false - # - # These overrides are only applied when running in a vm. - vm.overrides = { - default.clock.min-quantum = 1024 - } -} - -context.spa-libs = { - #<factory-name regex> = <library-name> - # - # Used to find spa factory names. 
It maps an spa factory name - # regular expression to a library name that should contain - # that factory. - # - audio.convert.* = audioconvert/libspa-audioconvert - api.alsa.* = alsa/libspa-alsa - api.v4l2.* = v4l2/libspa-v4l2 - api.libcamera.* = libcamera/libspa-libcamera - api.bluez5.* = bluez5/libspa-bluez5 - api.vulkan.* = vulkan/libspa-vulkan - api.jack.* = jack/libspa-jack - support.* = support/libspa-support - #videotestsrc = videotestsrc/libspa-videotestsrc - #audiotestsrc = audiotestsrc/libspa-audiotestsrc -} - -context.modules = [ - #{ name = <module-name> - # [ args = { <key> = <value> ... } ] - # [ flags = [ [ ifexists ] [ nofail ] ] - #} - # - # Loads a module with the given parameters. - # If ifexists is given, the module is ignored when it is not found. - # If nofail is given, module initialization failures are ignored. - # - - # Uses realtime scheduling to boost the audio thread priorities. This uses - # RTKit if the user doesn't have permission to use regular realtime - # scheduling. - { name = libpipewire-module-rt - args = { - nice.level = -11 - #rt.prio = 88 - #rt.time.soft = -1 - #rt.time.hard = -1 - } - flags = [ ifexists nofail ] - } - - # The native communication protocol. - { name = libpipewire-module-protocol-native } - - # The profile module. Allows application to access profiler - # and performance data. It provides an interface that is used - # by pw-top and pw-profiler. - { name = libpipewire-module-profiler } - - # Allows applications to create metadata objects. It creates - # a factory for Metadata objects. - { name = libpipewire-module-metadata } - - # Creates a factory for making devices that run in the - # context of the PipeWire server. - { name = libpipewire-module-spa-device-factory } - - # Creates a factory for making nodes that run in the - # context of the PipeWire server. - { name = libpipewire-module-spa-node-factory } - - # Allows creating nodes that run in the context of the - # client. 
Is used by all clients that want to provide - # data to PipeWire. - { name = libpipewire-module-client-node } - - # Allows creating devices that run in the context of the - # client. Is used by the session manager. - { name = libpipewire-module-client-device } - - # The portal module monitors the PID of the portal process - # and tags connections with the same PID as portal - # connections. - { name = libpipewire-module-portal - flags = [ ifexists nofail ] - } - - # The access module can perform access checks and block - # new clients. - { name = libpipewire-module-access - args = { - # access.allowed to list an array of paths of allowed - # apps. - #access.allowed = [ - # /usr/bin/pipewire-media-session - #] - - # An array of rejected paths. - #access.rejected = [ ] - - # An array of paths with restricted access. - #access.restricted = [ ] - - # Anything not in the above lists gets assigned the - # access.force permission. - #access.force = flatpak - } - } - - # Makes a factory for wrapping nodes in an adapter with a - # converter and resampler. - { name = libpipewire-module-adapter } - - # Makes a factory for creating links between ports. - { name = libpipewire-module-link-factory } - - # Provides factories to make session manager objects. - { name = libpipewire-module-session-manager } - - # Use libcanberra to play X11 Bell - #{ name = libpipewire-module-x11-bell - # args = { - # #sink.name = "" - # #sample.name = "bell-window-system" - # #x11.display = null - # #x11.xauthority = null - # } - #} -] - -context.objects = [ - #{ factory = <factory-name> - # [ args = { <key> = <value> ... } ] - # [ flags = [ [ nofail ] ] - #} - # - # Creates an object from a PipeWire factory with the given parameters. - # If nofail is given, errors are ignored (and no object is created). 
- # - #{ factory = spa-node-factory args = { factory.name = videotestsrc node.name = videotestsrc Spa:Pod:Object:Param:Props:patternType = 1 } } - #{ factory = spa-device-factory args = { factory.name = api.jack.device foo=bar } flags = [ nofail ] } - #{ factory = spa-device-factory args = { factory.name = api.alsa.enum.udev } } - #{ factory = spa-node-factory args = { factory.name = api.alsa.seq.bridge node.name = Internal-MIDI-Bridge } } - #{ factory = adapter args = { factory.name = audiotestsrc node.name = my-test } } - #{ factory = spa-node-factory args = { factory.name = api.vulkan.compute.source node.name = my-compute-source } } - - # A default dummy driver. This handles nodes marked with the "node.always-driver" - # property when no other driver is currently active. JACK clients need this. - { factory = spa-node-factory - args = { - factory.name = support.node.driver - node.name = Dummy-Driver - node.group = pipewire.dummy - priority.driver = 20000 - } - } - { factory = spa-node-factory - args = { - factory.name = support.node.driver - node.name = Freewheel-Driver - priority.driver = 19000 - node.group = pipewire.freewheel - node.freewheel = true - } - } - # This creates a new Source node. It will have input ports - # that you can link, to provide audio for this source. - #{ factory = adapter - # args = { - # factory.name = support.null-audio-sink - # node.name = "my-mic" - # node.description = "Microphone" - # media.class = "Audio/Source/Virtual" - # audio.position = "FL,FR" - # } - #} - - # This creates a single PCM source device for the given - # alsa device path hw:0. You can change source to sink - # to make a sink in the same way. 
- #{ factory = adapter - # args = { - # factory.name = api.alsa.pcm.source - # node.name = "alsa-source" - # node.description = "PCM Source" - # media.class = "Audio/Source" - # api.alsa.path = "hw:0" - # api.alsa.period-size = 1024 - # api.alsa.headroom = 0 - # api.alsa.disable-mmap = false - # api.alsa.disable-batch = false - # audio.format = "S16LE" - # audio.rate = 48000 - # audio.channels = 2 - # audio.position = "FL,FR" - # } - #} -] - -context.exec = [ - #{ path = <program-name> [ args = "<arguments>" ] } - # - # Execute the given program with arguments. - # - # You can optionally start the session manager here, - # but it is better to start it as a systemd service. - # Run the session manager with -h for options. - # - #{ path = "/usr/bin/pipewire-media-session" args = "" } - # - # You can optionally start the pulseaudio-server here as well - # but it is better to start it as a systemd service. - # It can be interesting to start another daemon here that listens - # on another address with the -a option (eg. -a tcp:4713). - # - #{ path = "/usr/bin/pipewire" args = "-c pipewire-pulse.conf" } -] diff --git a/gemfeed/examples/conf/dotfiles/scripts/README.md b/gemfeed/examples/conf/dotfiles/scripts/README.md deleted file mode 100644 index ecbc8ec0..00000000 --- a/gemfeed/examples/conf/dotfiles/scripts/README.md +++ /dev/null @@ -1,3 +0,0 @@ -# Scripts installed to my ~/scripts - -Mostly quick-n-dirty ones! 
diff --git a/gemfeed/examples/conf/dotfiles/scripts/ai b/gemfeed/examples/conf/dotfiles/scripts/ai deleted file mode 100755 index abcf4909..00000000 --- a/gemfeed/examples/conf/dotfiles/scripts/ai +++ /dev/null @@ -1,7 +0,0 @@ -#!/usr/bin/env zsh - -if [ $(uname) = Darwin ]; then - exec hx.nvim-copilot-prompt "$@" -else - exec hx.hexai-prompt "$@" -fi diff --git a/gemfeed/examples/conf/dotfiles/scripts/brokenlinkfinder b/gemfeed/examples/conf/dotfiles/scripts/brokenlinkfinder deleted file mode 100644 index 7fe15765..00000000 --- a/gemfeed/examples/conf/dotfiles/scripts/brokenlinkfinder +++ /dev/null @@ -1,73 +0,0 @@ -#!/usr/bin/env ruby - -require 'net/http' -require 'uri' -require 'nokogiri' -require 'set' - -# Method to fetch and parse HTML from a URL -def fetch_html(url) - response = Net::HTTP.get_response(URI(url)) - response.body if response.is_a?(Net::HTTPSuccess) -rescue StandardError => e - puts "Error fetching #{url}: #{e.message}" - nil -end - -# Method to find and check links on a page -def check_links(url, domain) - html = fetch_html(url) - return unless html - - checked = Set.new - broken = Set.new - - document = Nokogiri::HTML(html) - links = document.css('a').map { |link| link['href'] }.compact - - internal_links = links.select do |link| - link.start_with?('/') || link.start_with?('./') || URI(link).host == domain - end - puts "Internal links: #{internal_links}" - - internal_links.uniq.each do |link| - full_url = link.start_with?('/') || link.start_with?('./') ? 
"#{url}#{link}" : link - full_url.sub!('./', '/') - next if checked.include?(full_url) - - broken << full_url unless check_link(full_url) - checked << full_url - end - - broken -end - -# Method to check if a link is broken -def check_link(url) - uri = URI(url) - response = Net::HTTP.get_response(uri) - - if response.is_a?(Net::HTTPSuccess) - puts "Working link: #{url}" - true - else - puts "Broken link: #{url} (HTTP #{response.code})" - false - end -rescue StandardError => e - puts "Error checking #{url}: #{e.message}" - false -end - -# Main program -if ARGV.length != 1 - puts 'Usage: ruby brokenlinkfinder.rb <URL>' - exit -end - -start_url = ARGV.first -domain = URI(start_url).host - -check_links(start_url, domain).each do |broken| - puts "Broken: #{broken}" -end diff --git a/gemfeed/examples/conf/dotfiles/scripts/gvim b/gemfeed/examples/conf/dotfiles/scripts/gvim deleted file mode 100755 index 5777a7ce..00000000 --- a/gemfeed/examples/conf/dotfiles/scripts/gvim +++ /dev/null @@ -1,7 +0,0 @@ -#!/bin/bash -# Hack so qutebrowser starts an editor (Helix) in a new ghostty terminal. - -declare -r FILE_PATH="$2" -#echo "$@" > /tmp/params.txt - -ghostty -e "hx $FILE_PATH" diff --git a/gemfeed/examples/conf/dotfiles/scripts/hx.aichat-prompt b/gemfeed/examples/conf/dotfiles/scripts/hx.aichat-prompt deleted file mode 100755 index 4cafcf5d..00000000 --- a/gemfeed/examples/conf/dotfiles/scripts/hx.aichat-prompt +++ /dev/null @@ -1,9 +0,0 @@ -#!/usr/bin/env zsh - -declare -xr INSTRUCTIONS='Answer only. If it is code, code only without code-block at the beginning and the end.' - -if [[ $# -eq 0 ]]; then - aichat "$(hx.prompt). $INSTRUCTIONS" -else - aichat "$@. 
$INSTRUCTIONS" -fi diff --git a/gemfeed/examples/conf/dotfiles/scripts/hx.chatgpt-prompt b/gemfeed/examples/conf/dotfiles/scripts/hx.chatgpt-prompt deleted file mode 100755 index e4b6047f..00000000 --- a/gemfeed/examples/conf/dotfiles/scripts/hx.chatgpt-prompt +++ /dev/null @@ -1,3 +0,0 @@ -#!/usr/bin/env zsh - -chatgpt "$(hx.prompt). Answer only. If it is code, code only without code-block at the beginning and the end." diff --git a/gemfeed/examples/conf/dotfiles/scripts/hx.goformatter b/gemfeed/examples/conf/dotfiles/scripts/hx.goformatter deleted file mode 100755 index 028fbb25..00000000 --- a/gemfeed/examples/conf/dotfiles/scripts/hx.goformatter +++ /dev/null @@ -1,3 +0,0 @@ -#!/bin/sh - -goimports | gofumpt diff --git a/gemfeed/examples/conf/dotfiles/scripts/hx.hexai-prompt b/gemfeed/examples/conf/dotfiles/scripts/hx.hexai-prompt deleted file mode 100755 index ef413c0a..00000000 --- a/gemfeed/examples/conf/dotfiles/scripts/hx.hexai-prompt +++ /dev/null @@ -1,9 +0,0 @@ -#!/usr/bin/env zsh - -declare -xr INSTRUCTIONS='Answer only. If it is code, code only without code-block at the beginning and the end.' - -if [[ $# -eq 0 ]]; then - hexai "$(hx.prompt). $INSTRUCTIONS" 2>/dev/null -else - hexai "$@. 
$INSTRUCTIONS" 2>/dev/null -fi diff --git a/gemfeed/examples/conf/dotfiles/scripts/hx.nvim-copilot-prompt b/gemfeed/examples/conf/dotfiles/scripts/hx.nvim-copilot-prompt deleted file mode 100755 index dcb28376..00000000 --- a/gemfeed/examples/conf/dotfiles/scripts/hx.nvim-copilot-prompt +++ /dev/null @@ -1,32 +0,0 @@ -#!/usr/bin/env zsh - -declare -r STDIN_FILE=~/.copilot_prompt_stdin.txt -declare -r INPUT_FILE=~/.copilot_chat_input.txt -declare -r OUTPUT_FILE=~/.copilot_chat_output.txt -declare INPUT_PROMPT - -if [ -f $OUTPUT_FILE.done ]; then - rm $OUTPUT_FILE.done -fi -cat > $STDIN_FILE &>/dev/null - -if [ $# -eq 0 ]; then - INPUT_PROMPT="$(hx.prompt)" -else - INPUT_PROMPT="$@" -fi - -cat <<INPUT_FILE > $INPUT_FILE -$INPUT_PROMPT for the following: - -$(cat $STDIN_FILE) - -If the result is code, print out the code only, don't print the \`\`\`-markers around the code block. -INPUT_FILE - -tmux split-window -v "nvim +':CopilotAsk'; mv $OUTPUT_FILE $OUTPUT_FILE.done" - -while [ ! -f "$OUTPUT_FILE.done" ]; do - sleep 0.2 -done -sed -n '/^## Copilot/,/^## User/ { /^## Copilot/d; /\[file:/d; /^## User/d; p; }' $OUTPUT_FILE.done diff --git a/gemfeed/examples/conf/dotfiles/scripts/hx.prompt b/gemfeed/examples/conf/dotfiles/scripts/hx.prompt deleted file mode 100755 index 8dd14dd3..00000000 --- a/gemfeed/examples/conf/dotfiles/scripts/hx.prompt +++ /dev/null @@ -1,14 +0,0 @@ -#!/usr/bin/env zsh - -declare -r REPLY_FILE=~/.hx-prompt-reply -if [ -f "$REPLY_FILE" ]; then - rm "$REPLY_FILE" -fi - -tmux split-window -v "touch $REPLY_FILE.tmp; hx $REPLY_FILE.tmp; mv $REPLY_FILE.tmp $REPLY_FILE" - -while [ ! 
-f "$REPLY_FILE" ]; do - sleep 0.2 -done - -cat "$REPLY_FILE" diff --git a/gemfeed/examples/conf/dotfiles/scripts/randomnote.rb b/gemfeed/examples/conf/dotfiles/scripts/randomnote.rb deleted file mode 100644 index b0c1b490..00000000 --- a/gemfeed/examples/conf/dotfiles/scripts/randomnote.rb +++ /dev/null @@ -1,30 +0,0 @@ -#!/usr/bin/env ruby - -NOTES_DIR = "#{ENV['HOME']}/git/foo.zone-content/gemtext/notes" -BOOK_PATH = "#{ENV['HOME']}/Buecher/Diverse/Search-Inside-Yourself.txt" -MIN_PERCENTAGE = 80 -MIN_LENGTH = 10 - -class String - CLEAN_PATTERN = [ - /\d\d\d-\d\d-\d\d/, /[^A-Za-z0-9!.;,?'" @]/, - /http.?:\/\/\S+/, /\S+\.gmi/, /^\./, /^\d/, - ] - def clean - CLEAN_PATTERN.each {|p| gsub! p, '' } - gsub(/\s+/, ' ').strip - end - def letter_percentage?(threshold) = threshold <= (100 * count("A-Za-z")) / length -end - -begin - srand Random.new_seed - puts File.read((Dir["#{NOTES_DIR}/*.gmi"] + [BOOK_PATH]).shuffle.sample) - .split("\n") - .map(&:clean) - .select{ |l| l.length >= MIN_LENGTH } - .reject{ |l| l.match?(/(Published at|EMail your comments)/) } - .reject{ |l| l.match?(/'|book notes/) } - .select{ |l| l.letter_percentage?(MIN_PERCENTAGE) } - .shuffle.sample -end diff --git a/gemfeed/examples/conf/dotfiles/scripts/taskwarriorfeeder.rb b/gemfeed/examples/conf/dotfiles/scripts/taskwarriorfeeder.rb deleted file mode 100644 index 8e3096ea..00000000 --- a/gemfeed/examples/conf/dotfiles/scripts/taskwarriorfeeder.rb +++ /dev/null @@ -1,221 +0,0 @@ -#!/usr/bin/env ruby - -require 'optparse' -require 'digest' -require 'json' -require 'set' - -PERSONAL_TIMESPAN_D = 30 -WORK_TIMESPAN_D = 14 -WORKTIME_DIR = "#{ENV['HOME']}/git/worktime".freeze -GOS_DIR = "#{ENV['HOME']}/.gosdir".freeze -MAX_PENDING_RANDOM_TASKS = 11 - -def maybe? - [true, false].sample -end - -def run_from_personal_device? 
- `uname`.chomp == 'Linux' -end - -def random_count - MAX_PENDING_RANDOM_TASKS - `task status:pending +random count`.to_i -end - -def notes(notes_dirs, prefix, dry) - notes_dirs.each do |notes_dir| - Dir["#{notes_dir}/#{prefix}-*"].each do |notes_file| - match = File.read(notes_file).strip.match(/(?<due>\d+)? *(?<tag>[A-Z]?[a-z,-:]+) *(?<body>.*)/m) - next unless match - - tags = match[:tag].split(',') + [prefix] - due = if match[:due].nil? - tags.include?('track') ? '1year' : "#{rand(0..PERSONAL_TIMESPAN_D)}d" - else - "#{match[:due]}d" - end - yield tags, match[:body], due - File.delete(notes_file) unless dry - end - end -end - -def random_quote(md_file) - tag = File.basename(md_file, '.md').downcase - lines = File.readlines(md_file) - - match = lines.first.match(/\((\d+)\)/) - timespan = run_from_personal_device? ? PERSONAL_TIMESPAN_D : WORK_TIMESPAN_D - timespan = match ? match[1].to_i : timespan - - quote = lines.select { |l| l.start_with? '*' }.map { |l| l.sub(/\* +/, '') }.sample - tags = [tag, 'random'] - tags << 'work' if maybe? and maybe? - yield tags, quote.chomp, "#{rand(0..timespan)}d" -end - -def run!(cmd, dry) - puts cmd - return if dry - - puts `#{cmd}` - raise "Command '#{cmd}' failed with #{$?.exitstatus}" if $?.exitstatus != 0 -rescue StandardError => e - puts "Error running command '#{cmd}': #{e.message}" - exit 1 -end - -def skill_add!(skills_str, dry) - skills_file = "#{WORKTIME_DIR}/skills.txt" - skills = {} # Initialize the hash; it was referenced below without being defined - skills_str.split(',').map(&:strip).each { skills[_1.to_s.downcase] = _1 } - - File.foreach(skills_file) do |line| - line.chomp!
- skills[line.downcase] = line - end - File.open("#{skills_file}.tmp", 'w') do |file| - skills.each_value { |skill| file.puts(skill) } - end - return if dry - - File.rename("#{skills_file}.tmp", skills_file) -end - -def worklog_add!(tag, quote, due, dry) - file = "#{WORKTIME_DIR}/wl-#{Time.now.to_i}n.txt" - content = "#{due.chomp 'd'} #{tag} #{quote}" - - puts "#{file}: #{content}" - File.write(file, content) unless dry -end - -# Queue to Gos https://codeberg.org/snonux/gos -def gos_queue!(tags, message, dry) - tags.delete('share') - platforms = [] - %w[linkedin li mastodon ma noop no].select { tags.include?(_1) }.each do |platform| - platforms << platform - tags.delete(platform) - end - unless platforms.empty? - platforms = %w[share] + platforms - tags = ["#{platforms.join(':')}"] + tags - end - tags = %w[share] + tags if tags.size == 1 && !tags.first.start_with?('share') - tags_str = tags.join(',') - - message = "#{tags_str.empty? ? '' : "#{tags_str} "}#{message}" - file = "#{GOS_DIR}/#{Digest::MD5.hexdigest(message)}.txt" - puts "Writing #{file} with #{message}" - File.write(file, message) unless dry -end - -def task_add!(tags, quote, due, dry) - if quote.empty? - puts 'Not adding task with empty quote' - return - end - if tags.include?('tr') - tags << 'track' - tags.delete('tr') - end - tags << 'work' if tags.include?('mentoring') || tags.include?('productivity') - tags.uniq! - - if tags.include?('task') - run! "task #{quote}", dry - else - project = tags.find { |t| t =~ /^[A-Z]/ } - project = if project.nil? - '' - else - tags.delete(project) - " project:#{project.downcase}" - end - priority = tags.include?('high') ? 'H' : '' - run! "task add due:#{due} priority:#{priority}#{project} +#{tags.join(' +')} '#{quote.gsub("'", '"')}'", dry - end -end - -def task_schedule!(id, due, dry) - run! 
"timeout 5s task modify #{id} due:#{due}", dry -end - -# Randomly schedule all unscheduled tasks except the ones with the +unsched tag -def unscheduled_tasks - lines = `task -lowhigh -unsched -nosched -notes -note -meeting -track due: 2>/dev/null`.split("\n").drop(1) - lines.pop - lines.map { |line| line.split.first }.each do |id| - yield id if id.to_i.positive? - end -end - -begin - opts = { - quotes_dir: "#{ENV['HOME']}/Notes/HabitsAndQuotes", - notes_dirs: "#{ENV['HOME']}/Notes,#{ENV['HOME']}/Notes/Quicklogger,#{ENV['HOME']}/git/worktime", - dry_run: false, - no_random: false - } - - opt_parser = OptionParser.new do |o| - o.banner = 'Usage: ruby taskwarriorfeeder.rb [options]' - o.on('-d', '--quotes-dir DIR', 'The quotes directory') { |v| opts[:quotes_dir] = v } - o.on('-n', '--notes-dirs DIR1,DIR2,...', 'The notes directories') { |v| opts[:notes_dirs] = v } - o.on('-D', '--dry-run', 'Dry run mode') { opts[:dry_run] = true } - o.on('-R', '--no-randoms', 'No random entries') { opts[:no_random] = true } - o.on_tail('-h', '--help', 'Show this help message and exit') { puts o; exit } - end - - opt_parser.parse!(ARGV) - core_habits_md_file = "#{opts[:quotes_dir]}/CoreHabits.md" - - (run_from_personal_device? ? %w[ql pl] : %w[wl]).each do |prefix| - notes(opts[:notes_dirs].split(','), prefix, opts[:dry_run]) do |tags, note, due| - if tags.include?('skill') || tags.include?('skills') - skill_add!(note, opts[:dry_run]) - elsif tags.include? 'work' - worklog_add!(:log, note, due, opts[:dry_run]) - elsif tags.any? { |tag| tag.start_with?('share') } - gos_queue!(tags, note, opts[:dry_run]) - else - task_add!(tags, note, due, opts[:dry_run]) - end - end - end - - unless opts[:no_random] - if File.exist?(core_habits_md_file) - random_quote(core_habits_md_file) do |tags, quote, due| - task_add!(tags, quote, due, opts[:dry_run]) - end - end - count = random_count - - Dir["#{opts[:quotes_dir]}/*.md"].shuffle.each do |md_file| - next unless maybe?
- break if count <= 0 - - random_quote(md_file) do |tags, quote, due| - task_add!(tags, quote, due, opts[:dry_run]) - count -= 1 - end - end - end - - if Dir.exist?(GOS_DIR) && !opts[:dry_run] - Dir["#{WORKTIME_DIR}/tw-gos-*.json"].each do |tw_gos| - JSON.parse(File.read(tw_gos)).each do |entry| - gos_queue!(entry['tags'], entry['description'], opts[:dry_run]) - end - File.delete(tw_gos) - rescue StandardError => e - puts e - end - end - - unscheduled_tasks do |id| - task_schedule!(id, "#{rand(0..PERSONAL_TIMESPAN_D)}d", opts[:dry_run]) - end -end diff --git a/gemfeed/examples/conf/dotfiles/signature b/gemfeed/examples/conf/dotfiles/signature deleted file mode 100644 index 8031719e..00000000 --- a/gemfeed/examples/conf/dotfiles/signature +++ /dev/null @@ -1,2 +0,0 @@ -Paul Buetow -paul.buetow.org diff --git a/gemfeed/examples/conf/dotfiles/ssh/config b/gemfeed/examples/conf/dotfiles/ssh/config deleted file mode 100644 index 5b4b250e..00000000 --- a/gemfeed/examples/conf/dotfiles/ssh/config +++ /dev/null @@ -1,21 +0,0 @@ -ControlPath ~/.ssh/cp-%C -ControlMaster auto -#UseKeychain yes -AddKeysToAgent yes -ControlPersist 60m -#StrictHostKeyChecking no - -Host blowfish.buetow.org -User rex -Port 2 - -Host fishfinger.buetow.org -User rex -Port 2 - -Host *.aws.buetow.org -User ec2-user -Port 22 - -Host *.buetow.org -Port 2 diff --git a/gemfeed/examples/conf/dotfiles/sway/config.d/keyboard.conf b/gemfeed/examples/conf/dotfiles/sway/config.d/keyboard.conf deleted file mode 100644 index 6b10a788..00000000 --- a/gemfeed/examples/conf/dotfiles/sway/config.d/keyboard.conf +++ /dev/null @@ -1,6 +0,0 @@ -input "type:keyboard" { - xkb_layout us,gb,de - xkb_options grp:win_space_toggle -} - -input * xkb_options "caps:escape" diff --git a/gemfeed/examples/conf/dotfiles/tmux/tmux.conf b/gemfeed/examples/conf/dotfiles/tmux/tmux.conf deleted file mode 100644 index 42c53866..00000000 --- a/gemfeed/examples/conf/dotfiles/tmux/tmux.conf +++ /dev/null @@ -1,32 +0,0 @@ -source 
~/.config/tmux/tmux.local.conf - -set-option -g allow-rename off -set-option -g history-limit 100000 -set-option -s escape-time 0 -set-option -g set-titles on - -set-window-option -g mode-keys vi - -bind-key h select-pane -L -bind-key j select-pane -D -bind-key k select-pane -U -bind-key l select-pane -R - -bind-key H resize-pane -L 5 -bind-key J resize-pane -D 5 -bind-key K resize-pane -U 5 -bind-key L resize-pane -R 5 - -bind-key b break-pane -d -bind-key c new-window -c '#{pane_current_path}' -bind-key F new-window -n "session-switcher" "tmux list-sessions | fzf | cut -d: -f1 | xargs tmux switch-client -t" -bind-key p setw synchronize-panes off -bind-key P setw synchronize-panes on -bind-key r source-file ~/.tmux.conf \; display-message "~/.tmux.conf reloaded" -bind-key T choose-tree - -set-option -g pane-active-border-style fg=magenta,bold - -set -g status-right '#{@hexai_status} #[fg=colour8]| %H:%M' -set -g status-right-length 120 -set-environment -g HEXAI_TMUX_STATUS_THEME white-on-purple diff --git a/gemfeed/examples/conf/dotfiles/tmux/tmux.local.conf b/gemfeed/examples/conf/dotfiles/tmux/tmux.local.conf deleted file mode 100644 index adb6294b..00000000 --- a/gemfeed/examples/conf/dotfiles/tmux/tmux.local.conf +++ /dev/null @@ -1,2 +0,0 @@ -bind-key -T copy-mode-vi 'v' send -X begin-selection -bind-key -T copy-mode-vi 'y' send -X copy-selection-and-cancel diff --git a/gemfeed/examples/conf/dotfiles/vale.ini b/gemfeed/examples/conf/dotfiles/vale.ini deleted file mode 100644 index 3b396788..00000000 --- a/gemfeed/examples/conf/dotfiles/vale.ini +++ /dev/null @@ -1,6 +0,0 @@ -StylesPath = styles -MinAlertLevel = suggestion -Packages = Microsoft, proselint - -[*] -BasedOnStyles = Vale, Microsoft, proselint diff --git a/gemfeed/examples/conf/dotfiles/waybar/config.jsonc b/gemfeed/examples/conf/dotfiles/waybar/config.jsonc deleted file mode 100644 index db2aeea6..00000000 --- a/gemfeed/examples/conf/dotfiles/waybar/config.jsonc +++ /dev/null @@ -1,194 +0,0 @@ -// 
-*- mode: jsonc -*- -{ - // "layer": "top", // Waybar at top layer - // "position": "bottom", // Waybar position (top|bottom|left|right) - "height": 20, // Waybar height (to be removed for auto height) - // "width": 1280, // Waybar width - "spacing": 1, // Gaps between modules (4px) - // Choose the order of the modules - "modules-left": [ - "sway/workspaces", - "sway/mode", - "sway/scratchpad" - ], - "modules-center": [ - ], - "modules-right": [ - "idle_inhibitor", - "pulseaudio", - "network", - "power-profiles-daemon", - "temperature", - "sway/language", - "battery", - "clock", - "tray" - ], - // Modules configuration - // "sway/workspaces": { - // "disable-scroll": true, - // "all-outputs": true, - // "warp-on-scroll": false, - // "format": "{name}: {icon}", - // "format-icons": { - // "1": "", - // "2": "", - // "3": "", - // "4": "", - // "5": "", - // "urgent": "", - // "focused": "", - // "default": "" - // } - // }, - "keyboard-state": { - "numlock": true, - "capslock": true, - "format": "{name} {icon}", - "format-icons": { - "locked": "", - "unlocked": "" - } - }, - "sway/mode": { - "format": "<span style=\"italic\">{}</span>" - }, - "sway/scratchpad": { - "format": "{icon} {count}", - "show-empty": false, - "format-icons": ["", ""], - "tooltip": true, - "tooltip-format": "{app}: {title}" - }, - "mpd": { - "format": "{stateIcon} {consumeIcon}{randomIcon}{repeatIcon}{singleIcon}{artist} - {album} - {title} ({elapsedTime:%M:%S}/{totalTime:%M:%S}) ⸨{songPosition}|{queueLength}⸩ {volume}% ", - "format-disconnected": "Disconnected ", - "format-stopped": "{consumeIcon}{randomIcon}{repeatIcon}{singleIcon}Stopped ", - "unknown-tag": "N/A", - "interval": 5, - "consume-icons": { - "on": " " - }, - "random-icons": { - "off": "<span color=\"#f53c3c\"></span> ", - "on": " " - }, - "repeat-icons": { - "on": " " - }, - "single-icons": { - "on": "1 " - }, - "state-icons": { - "paused": "", - "playing": "" - }, - "tooltip-format": "MPD (connected)", - 
"tooltip-format-disconnected": "MPD (disconnected)" - }, - "idle_inhibitor": { - "format": "{icon}", - "format-icons": { - "activated": "", - "deactivated": "" - } - }, - "tray": { - // "icon-size": 21, - "spacing": 10 - }, - "clock": { - // "timezone": "America/New_York", - "tooltip-format": "<big>{:%Y %B}</big>\n<tt><small>{calendar}</small></tt>", - "format-alt": "{:%Y-%m-%d}" - }, - "cpu": { - "format": "{usage}% ", - "tooltip": false - }, - "memory": { - "format": "{}% " - }, - "temperature": { - // "thermal-zone": 2, - // "hwmon-path": "/sys/class/hwmon/hwmon2/temp1_input", - "critical-threshold": 80, - // "format-critical": "{temperatureC}°C {icon}", - "format": "{temperatureC}°C {icon}", - "format-icons": ["", "", ""] - }, - "backlight": { - // "device": "acpi_video1", - "format": "{percent}% {icon}", - "format-icons": ["🌑", "🌘", "🌗", "🌖", "🌕"] - }, - "battery": { - "states": { - // "good": 95, - "warning": 30, - "critical": 15 - }, - "format": "{capacity}% {icon}", - "format-full": "{capacity}% {icon}", - "format-charging": "{capacity}% ", - "format-plugged": "{capacity}% ", - "format-alt": "{time} {icon}", - // "format-good": "", // An empty format will hide the module - // "format-full": "", - "format-icons": ["", "", "", "", ""] - }, - "battery#bat2": { - "bat": "BAT2" - }, - "power-profiles-daemon": { - "format": "{icon}", - "tooltip-format": "Power profile: {profile}\nDriver: {driver}", - "tooltip": true, - "format-icons": { - "default": "", - "performance": "", - "balanced": "", - "power-saver": "" - } - }, - "network": { - // "interface": "wlp2*", // (Optional) To force the use of this interface - "format-wifi": "{essid} ({signalStrength}%) ", - "format-ethernet": "{ipaddr}/{cidr} ", - "tooltip-format": "{ifname} via {gwaddr} ", - "format-linked": "{ifname} (No IP) ", - "format-disconnected": "Disconnected ⚠", - "format-alt": "{ifname}: {ipaddr}/{cidr}" - }, - "pulseaudio": { - // "scroll-step": 1, // %, can be a float - "format": "{volume}% {icon} 
{format_source}", - "format-bluetooth": "{volume}% {icon} {format_source}", - "format-bluetooth-muted": " {icon} {format_source}", - "format-muted": " {format_source}", - "format-source": "{volume}% ", - "format-source-muted": "", - "format-icons": { - "headphone": "", - "hands-free": "", - "headset": "", - "phone": "", - "portable": "", - "car": "", - "default": ["", "", ""] - }, - "on-click": "pavucontrol" - }, - "custom/media": { - "format": "{icon} {}", - "return-type": "json", - "max-length": 40, - "format-icons": { - "spotify": "", - "default": "🎜" - }, - "escape": true, - "exec": "$HOME/.config/waybar/mediaplayer.py 2> /dev/null" // Script in resources folder - // "exec": "$HOME/.config/waybar/mediaplayer.py --player spotify 2> /dev/null" // Filter player based on name - } -} diff --git a/gemfeed/examples/conf/dotfiles/waybar/style.css b/gemfeed/examples/conf/dotfiles/waybar/style.css deleted file mode 100644 index e0310372..00000000 --- a/gemfeed/examples/conf/dotfiles/waybar/style.css +++ /dev/null @@ -1,326 +0,0 @@ -* { - font-family: 'Noto Sans Mono', 'Font Awesome 6 Free', 'Font Awesome 6 Brands', monospace; - font-size: 13px; -} - -window#waybar { - background-color: rgba(43, 48, 59, 0.5); - border-bottom: 3px solid rgba(100, 114, 125, 0.5); - color: #ffffff; - transition-property: background-color; - transition-duration: .5s; -} - -window#waybar.hidden { - opacity: 0.2; -} - -/* -window#waybar.empty { - background-color: transparent; -} -window#waybar.solo { - background-color: #FFFFFF; -} -*/ - -window#waybar.termite { - background-color: #3F3F3F; -} - -window#waybar.chromium { - background-color: #000000; - border: none; -} - -button { - /* Use box-shadow instead of border so the text isn't offset */ - box-shadow: inset 0 -3px transparent; - /* Avoid rounded borders under each button name */ - border: none; - border-radius: 0; -} - -/* https://github.com/Alexays/Waybar/wiki/FAQ#the-workspace-buttons-have-a-strange-hover-effect */ -button:hover { - 
background: inherit; - box-shadow: inset 0 -3px #ffffff; -} - -/* you can set a style on hover for any module like this */ -#pulseaudio:hover { - background-color: #a37800; -} - -#workspaces button { - padding: 0 5px; - background-color: transparent; - color: #ffffff; -} - -#workspaces button:hover { - background: rgba(0, 0, 0, 0.2); -} - -#workspaces button.focused { - background-color: #64727D; - box-shadow: inset 0 -3px #ffffff; -} - -#workspaces button.urgent { - background-color: #eb4d4b; -} - -#mode { - background-color: #64727D; - box-shadow: inset 0 -3px #ffffff; -} - -#clock, -#battery, -#cpu, -#memory, -#disk, -#temperature, -#backlight, -#network, -#pulseaudio, -#wireplumber, -#custom-media, -#tray, -#mode, -#idle_inhibitor, -#scratchpad, -#power-profiles-daemon, -#mpd { - padding: 0 10px; - color: #ffffff; -} - -#window, -#workspaces { - margin: 0 4px; -} - -/* If workspaces is the leftmost module, omit left margin */ -.modules-left > widget:first-child > #workspaces { - margin-left: 0; -} - -/* If workspaces is the rightmost module, omit right margin */ -.modules-right > widget:last-child > #workspaces { - margin-right: 0; -} - -#clock { - background-color: #64727D; -} - -#battery { - background-color: #ffffff; - color: #000000; -} - -#battery.charging, #battery.plugged { - color: #ffffff; - background-color: #26A65B; -} - -@keyframes blink { - to { - background-color: #ffffff; - color: #000000; - } -} - -/* Using steps() instead of linear as a timing function to limit cpu usage */ -#battery.critical:not(.charging) { - background-color: #f53c3c; - color: #ffffff; - animation-name: blink; - animation-duration: 0.5s; - animation-timing-function: steps(12); - animation-iteration-count: infinite; - animation-direction: alternate; -} - -#power-profiles-daemon { - padding-right: 15px; -} - -#power-profiles-daemon.performance { - background-color: #f53c3c; - color: #ffffff; -} - -#power-profiles-daemon.balanced { - background-color: #2980b9; - color: #ffffff; 
-} - -#power-profiles-daemon.power-saver { - background-color: #2ecc71; - color: #000000; -} - -label:focus { - background-color: #000000; -} - -#cpu { - background-color: #2ecc71; - color: #000000; -} - -#memory { - background-color: #9b59b6; -} - -#disk { - background-color: #964B00; -} - -#backlight { - background-color: #90b1b1; -} - -#network { - background-color: #2980b9; -} - -#network.disconnected { - background-color: #f53c3c; -} - -#pulseaudio { - background-color: #f1c40f; - color: #000000; -} - -#pulseaudio.muted { - background-color: #90b1b1; - color: #2a5c45; -} - -#wireplumber { - background-color: #fff0f5; - color: #000000; -} - -#wireplumber.muted { - background-color: #f53c3c; -} - -#custom-media { - background-color: #66cc99; - color: #2a5c45; - min-width: 100px; -} - -#custom-media.custom-spotify { - background-color: #66cc99; -} - -#custom-media.custom-vlc { - background-color: #ffa000; -} - -#temperature { - background-color: #f0932b; -} - -#temperature.critical { - background-color: #eb4d4b; -} - -#tray { - background-color: #2980b9; -} - -#tray > .passive { - -gtk-icon-effect: dim; -} - -#tray > .needs-attention { - -gtk-icon-effect: highlight; - background-color: #eb4d4b; -} - -#idle_inhibitor { - background-color: #2d3436; -} - -#idle_inhibitor.activated { - background-color: #ecf0f1; - color: #2d3436; -} - -#mpd { - background-color: #66cc99; - color: #2a5c45; -} - -#mpd.disconnected { - background-color: #f53c3c; -} - -#mpd.stopped { - background-color: #90b1b1; -} - -#mpd.paused { - background-color: #51a37a; -} - -#language { - background: #00b093; - color: #740864; - padding: 0 5px; - margin: 0 5px; - min-width: 16px; -} - -#keyboard-state { - background: #97e1ad; - color: #000000; - padding: 0 0px; - margin: 0 5px; - min-width: 16px; -} - -#keyboard-state > label { - padding: 0 5px; -} - -#keyboard-state > label.locked { - background: rgba(0, 0, 0, 0.2); -} - -#scratchpad { - background: rgba(0, 0, 0, 0.2); -} - -#scratchpad.empty { 
- background-color: transparent; -} - -#privacy { - padding: 0; -} - -#privacy-item { - padding: 0 5px; - color: white; -} - -#privacy-item.screenshare { - background-color: #cf5700; -} - -#privacy-item.audio-in { - background-color: #1ca000; -} - -#privacy-item.audio-out { - background-color: #0069d4; -} diff --git a/gemfeed/examples/conf/f3s/anki-sync-server/Justfile b/gemfeed/examples/conf/f3s/anki-sync-server/Justfile deleted file mode 100644 index 73d679c7..00000000 --- a/gemfeed/examples/conf/f3s/anki-sync-server/Justfile +++ /dev/null @@ -1,12 +0,0 @@ -NAMESPACE := "services" -RELEASE_NAME := "anki-sync-server" -CHART_PATH := "./helm-chart" - -install: - helm install {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} --create-namespace - -upgrade: - helm upgrade {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} - -delete: - helm uninstall {{RELEASE_NAME}} --namespace {{NAMESPACE}} diff --git a/gemfeed/examples/conf/f3s/anki-sync-server/README.md b/gemfeed/examples/conf/f3s/anki-sync-server/README.md deleted file mode 100644 index e3aee076..00000000 --- a/gemfeed/examples/conf/f3s/anki-sync-server/README.md +++ /dev/null @@ -1,34 +0,0 @@ - -# Anki Sync Server Kubernetes Deployment - -This directory contains the Kubernetes configuration for deploying the Anki Sync Server. - -## Deployment - -To deploy the Anki Sync Server, apply the Kubernetes manifests in this directory: - -```bash -make apply -``` - -## Secret Management - -The deployment uses a Kubernetes secret to manage the `SYNC_USER1` environment variable. This secret is not included in the repository for security reasons. You must create it manually in the `services` namespace. - -### Creating the Secret - -To create the secret, use the following `kubectl` command: - -```bash -kubectl create secret generic anki-sync-server-secret --from-literal=SYNC_USER1='paul:SECRETPASSWORD' -n services -``` - -Replace `paul:SECRETPASSWORD` with your desired username and password. 
- -### Updating the Secret - -To update the secret, you can delete and recreate it, or use `kubectl edit`: - -```bash -kubectl edit secret anki-sync-server-secret -n services -``` diff --git a/gemfeed/examples/conf/f3s/anki-sync-server/docker-image/Dockerfile b/gemfeed/examples/conf/f3s/anki-sync-server/docker-image/Dockerfile deleted file mode 100644 index 81fad856..00000000 --- a/gemfeed/examples/conf/f3s/anki-sync-server/docker-image/Dockerfile +++ /dev/null @@ -1,39 +0,0 @@ -FROM rust:1.85.0-alpine3.20 AS builder - -ARG ANKI_VERSION - -RUN apk update && apk add --no-cache build-base protobuf && rm -rf /var/cache/apk/* - -RUN cargo install --git https://github.com/ankitects/anki.git \ ---tag ${ANKI_VERSION} \ ---root /anki-server \ ---locked \ -anki-sync-server - -FROM alpine:3.21.0 - -# Default PUID and PGID values (can be overridden at runtime). Use these to -# ensure the files on the volume have the permissions you need. -ENV PUID=1000 -ENV PGID=1000 - -COPY --from=builder /anki-server/bin/anki-sync-server /usr/local/bin/anki-sync-server - -RUN apk update && apk add --no-cache bash su-exec && rm -rf /var/cache/apk/* - -EXPOSE 8080 - -COPY entrypoint.sh /entrypoint.sh -RUN chmod +x /entrypoint.sh - -ENTRYPOINT ["/entrypoint.sh"] -CMD ["anki-sync-server"] - -# This health check will work for Anki versions 24.08.x and newer. -# For older versions, it may incorrectly report an unhealthy status, which should not be the case. 
-HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \ - CMD wget -qO- http://127.0.0.1:8080/health || exit 1 - -VOLUME /anki_data - -LABEL maintainer="Jean Khawand <jk@jeankhawand.com>" diff --git a/gemfeed/examples/conf/f3s/anki-sync-server/docker-image/Justfile b/gemfeed/examples/conf/f3s/anki-sync-server/docker-image/Justfile deleted file mode 100644 index 5da854f3..00000000 --- a/gemfeed/examples/conf/f3s/anki-sync-server/docker-image/Justfile +++ /dev/null @@ -1,6 +0,0 @@ -all: - docker build -t anki-sync-server:25.07.5b --build-arg ANKI_VERSION=25.07.5 . -f3s: - docker build -t anki-sync-server:25.07.5b --build-arg ANKI_VERSION=25.07.5 . - docker tag anki-sync-server:25.07.5b r0.lan.buetow.org:30001/anki-sync-server:25.07.5b - docker push r0.lan.buetow.org:30001/anki-sync-server:25.07.5b diff --git a/gemfeed/examples/conf/f3s/anki-sync-server/docker-image/entrypoint.sh b/gemfeed/examples/conf/f3s/anki-sync-server/docker-image/entrypoint.sh deleted file mode 100644 index 9a72cca3..00000000 --- a/gemfeed/examples/conf/f3s/anki-sync-server/docker-image/entrypoint.sh +++ /dev/null @@ -1,31 +0,0 @@ -#!/bin/sh -set -o errexit -set -o nounset -set -o pipefail - -# Default PUID and PGID if not provided -export PUID=${PUID:-1000} -export PGID=${PGID:-1000} - -# These values are fixed and cannot be overwritten from the outside for -# convenience and safety reasons -export SYNC_PORT=8080 -export SYNC_BASE=/anki_data - -# Check if group exists, create if not -if ! getent group anki-group > /dev/null 2>&1; then - addgroup -g "$PGID" anki-group -fi - -# Check if user exists, create if not -if ! 
id -u anki > /dev/null 2>&1; then - adduser -D -H -u "$PUID" -G anki-group anki -fi - -# Fix ownership of mounted volumes -mkdir -p /anki_data -#chown anki:anki-group /anki_data - -# Run the provided command as the `anki` user -exec su-exec anki "$@" - diff --git a/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/Chart.yaml b/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/Chart.yaml deleted file mode 100644 index 632f09ae..00000000 --- a/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/Chart.yaml +++ /dev/null @@ -1,5 +0,0 @@ -apiVersion: v2 -name: anki-sync-server -description: A Helm chart for deploying the Anki Sync Server. -version: 0.1.0 -appVersion: "25.07.5b" diff --git a/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/README.md b/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/README.md deleted file mode 100644 index 1b485be9..00000000 --- a/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/README.md +++ /dev/null @@ -1,11 +0,0 @@ -# Anki Sync Server Helm Chart - -This chart deploys the Anki Sync Server. - -## Installing the Chart - -To install the chart with the release name `my-release`, run the following command: - -```bash -helm install anki-sync-server . 
--namespace services --create-namespace -``` diff --git a/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/templates/deployment.yaml b/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/templates/deployment.yaml deleted file mode 100644 index 181b6c97..00000000 --- a/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/templates/deployment.yaml +++ /dev/null @@ -1,35 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: anki-sync-server - namespace: services -spec: - replicas: 1 - selector: - matchLabels: - app: anki-sync-server - template: - metadata: - labels: - app: anki-sync-server - spec: - containers: - - name: anki-sync-server - image: registry.lan.buetow.org:30001/anki-sync-server:25.07.5b - ports: - - containerPort: 8080 - env: - - name: SYNC_PORT - value: "8080" - - name: SYNC_USER1 - valueFrom: - secretKeyRef: - name: anki-sync-server-secret - key: SYNC_USER1 - volumeMounts: - - name: anki-data - mountPath: /anki_data - volumes: - - name: anki-data - persistentVolumeClaim: - claimName: anki-data-pvc diff --git a/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/templates/ingress.yaml b/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/templates/ingress.yaml deleted file mode 100644 index 010c5884..00000000 --- a/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/templates/ingress.yaml +++ /dev/null @@ -1,20 +0,0 @@ -apiVersion: networking.k8s.io/v1 -kind: Ingress -metadata: - name: anki-sync-server-ingress - namespace: services - annotations: - spec.ingressClassName: traefik - traefik.ingress.kubernetes.io/router.entrypoints: web -spec: - rules: - - host: anki.f3s.buetow.org - http: - paths: - - path: / - pathType: Prefix - backend: - service: - name: anki-sync-server-service - port: - number: 8080 diff --git a/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/templates/persistent-volume.yaml b/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/templates/persistent-volume.yaml deleted file mode 100644 index 
da715ea2..00000000 --- a/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/templates/persistent-volume.yaml +++ /dev/null @@ -1,27 +0,0 @@ -apiVersion: v1 -kind: PersistentVolume -metadata: - name: anki-data-pv -spec: - capacity: - storage: 10Gi - volumeMode: Filesystem - accessModes: - - ReadWriteOnce - persistentVolumeReclaimPolicy: Retain - hostPath: - path: /data/nfs/k3svolumes/anki-sync-server/anki_data - type: Directory ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: anki-data-pvc - namespace: services -spec: - storageClassName: "" - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 10Gi diff --git a/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/templates/service.yaml b/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/templates/service.yaml deleted file mode 100644 index a8eb183e..00000000 --- a/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/templates/service.yaml +++ /dev/null @@ -1,15 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - labels: - app: anki-sync-server - name: anki-sync-server-service - namespace: services -spec: - ports: - - name: web - port: 8080 - protocol: TCP - targetPort: 8080 - selector: - app: anki-sync-server diff --git a/gemfeed/examples/conf/f3s/audiobookshelf/Justfile b/gemfeed/examples/conf/f3s/audiobookshelf/Justfile deleted file mode 100644 index bc020beb..00000000 --- a/gemfeed/examples/conf/f3s/audiobookshelf/Justfile +++ /dev/null @@ -1,12 +0,0 @@ -NAMESPACE := "services" -RELEASE_NAME := "audiobookshelf" -CHART_PATH := "./helm-chart" - -install: - helm install {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} --create-namespace - -upgrade: - helm upgrade {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} - -delete: - helm uninstall {{RELEASE_NAME}} --namespace {{NAMESPACE}} diff --git a/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/Chart.yaml b/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/Chart.yaml deleted file mode 100644 index 
dbd55e07..00000000 --- a/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/Chart.yaml +++ /dev/null @@ -1,5 +0,0 @@ -apiVersion: v2 -name: audiobookshelf -description: A Helm chart for deploying Audiobookshelf. -version: 0.1.0 -appVersion: "latest" diff --git a/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/README.md b/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/README.md deleted file mode 100644 index 670efa09..00000000 --- a/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/README.md +++ /dev/null @@ -1,19 +0,0 @@ -# Audiobookshelf Helm Chart - -This chart deploys Audiobookshelf. - -## Prerequisites - -Before installing the chart, you must manually create the following directories on your host system to be used by the persistent volumes: - -- `/data/nfs/k3svolumes/audiobookshelf/config` -- `/data/nfs/k3svolumes/audiobookshelf/audiobooks` -- `/data/nfs/k3svolumes/audiobookshelf/podcasts` - -## Installing the Chart - -To install the chart with the release name `my-release`, run the following command: - -```bash -helm install audiobookshelf . 
--namespace services --create-namespace -``` diff --git a/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/templates/deployment.yaml b/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/templates/deployment.yaml deleted file mode 100644 index 65e536ab..00000000 --- a/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/templates/deployment.yaml +++ /dev/null @@ -1,53 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: audiobookshelf - namespace: services -spec: - replicas: 1 - selector: - matchLabels: - app: audiobookshelf - template: - metadata: - labels: - app: audiobookshelf - spec: - containers: - - name: audiobookshelf - image: ghcr.io/advplyr/audiobookshelf - ports: - - containerPort: 80 - volumeMounts: - - name: audiobookshelf-config - mountPath: /config - - name: audiobookshelf-audiobooks - mountPath: /audiobooks - - name: audiobookshelf-podcasts - mountPath: /podcasts - volumes: - - name: audiobookshelf-config - persistentVolumeClaim: - claimName: audiobookshelf-config-pvc - - name: audiobookshelf-audiobooks - persistentVolumeClaim: - claimName: audiobookshelf-audiobooks-pvc - - name: audiobookshelf-podcasts - persistentVolumeClaim: - claimName: audiobookshelf-podcasts-pvc ---- -apiVersion: v1 -kind: Service -metadata: - labels: - app: audiobookshelf - name: audiobookshelf-service - namespace: services -spec: - ports: - - name: web - port: 80 - protocol: TCP - targetPort: 80 - selector: - app: audiobookshelf diff --git a/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/templates/ingress.yaml b/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/templates/ingress.yaml deleted file mode 100644 index 6e4f7ac7..00000000 --- a/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/templates/ingress.yaml +++ /dev/null @@ -1,20 +0,0 @@ -apiVersion: networking.k8s.io/v1 -kind: Ingress -metadata: - name: audiobookshelf-ingress - namespace: services - annotations: - spec.ingressClassName: traefik - traefik.ingress.kubernetes.io/router.entrypoints: 
web -spec: - rules: - - host: audiobookshelf.f3s.buetow.org - http: - paths: - - path: / - pathType: Prefix - backend: - service: - name: audiobookshelf-service - port: - number: 80 diff --git a/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/templates/persistent-volumes.yaml b/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/templates/persistent-volumes.yaml deleted file mode 100644 index 8691d141..00000000 --- a/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/templates/persistent-volumes.yaml +++ /dev/null @@ -1,83 +0,0 @@ -apiVersion: v1 -kind: PersistentVolume -metadata: - name: audiobookshelf-config-pv -spec: - capacity: - storage: 1Gi - volumeMode: Filesystem - accessModes: - - ReadWriteOnce - persistentVolumeReclaimPolicy: Retain - hostPath: - path: /data/nfs/k3svolumes/audiobookshelf/config - type: Directory ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: audiobookshelf-config-pvc - namespace: services -spec: - storageClassName: "" - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 1Gi ---- -apiVersion: v1 -kind: PersistentVolume -metadata: - name: audiobookshelf-audiobooks-pv -spec: - capacity: - storage: 300Gi - volumeMode: Filesystem - accessModes: - - ReadWriteOnce - persistentVolumeReclaimPolicy: Retain - hostPath: - path: /data/nfs/k3svolumes/audiobookshelf/audiobooks - type: Directory ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: audiobookshelf-audiobooks-pvc - namespace: services -spec: - storageClassName: "" - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 300Gi ---- -apiVersion: v1 -kind: PersistentVolume -metadata: - name: audiobookshelf-podcasts-pv -spec: - capacity: - storage: 50Gi - volumeMode: Filesystem - accessModes: - - ReadWriteOnce - persistentVolumeReclaimPolicy: Retain - hostPath: - path: /data/nfs/k3svolumes/audiobookshelf/podcasts - type: Directory ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: 
audiobookshelf-podcasts-pvc - namespace: services -spec: - storageClassName: "" - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 50Gi diff --git a/gemfeed/examples/conf/f3s/example-apache-volume-claim/Justfile b/gemfeed/examples/conf/f3s/example-apache-volume-claim/Justfile deleted file mode 100644 index e8003e8b..00000000 --- a/gemfeed/examples/conf/f3s/example-apache-volume-claim/Justfile +++ /dev/null @@ -1,12 +0,0 @@ -NAMESPACE := "test" -RELEASE_NAME := "example-apache-volume-claim" -CHART_PATH := "./helm-chart" - -install: - helm install {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} --create-namespace - -upgrade: - helm upgrade {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} - -delete: - helm uninstall {{RELEASE_NAME}} --namespace {{NAMESPACE}} diff --git a/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/Chart.yaml b/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/Chart.yaml deleted file mode 100644 index 78d53976..00000000 --- a/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/Chart.yaml +++ /dev/null @@ -1,5 +0,0 @@ -apiVersion: v2 -name: apache-volume-claim -description: A Helm chart for deploying Apache with a persistent volume claim. -version: 0.1.0 -appVersion: "1.0" diff --git a/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/README.md b/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/README.md deleted file mode 100644 index 23d14cde..00000000 --- a/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/README.md +++ /dev/null @@ -1,11 +0,0 @@ -# Apache Helm Chart with Persistent Volume - -This chart deploys a simple Apache web server with a persistent volume claim. - -## Installing the Chart - -To install the chart with the release name `my-release`, run the following command: - -```bash -helm install example-apache-volume-claim . --namespace test --create-namespace -```
\ No newline at end of file diff --git a/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/templates/apache-deployment.yaml b/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/templates/apache-deployment.yaml deleted file mode 100644 index 78706a34..00000000 --- a/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/templates/apache-deployment.yaml +++ /dev/null @@ -1,41 +0,0 @@ -# Apache HTTP Server Deployment -apiVersion: apps/v1 -kind: Deployment -metadata: - name: apache-deployment - namespace: test -spec: - replicas: 2 - selector: - matchLabels: - app: apache - template: - metadata: - labels: - app: apache - spec: - containers: - - name: apache - image: httpd:latest - ports: - # Container port where Apache listens - - containerPort: 80 - readinessProbe: - httpGet: - path: / - port: 80 - initialDelaySeconds: 5 - periodSeconds: 10 - livenessProbe: - httpGet: - path: / - port: 80 - initialDelaySeconds: 15 - periodSeconds: 10 - volumeMounts: - - name: apache-htdocs - mountPath: /usr/local/apache2/htdocs/ - volumes: - - name: apache-htdocs - persistentVolumeClaim: - claimName: example-apache-pvc diff --git a/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/templates/apache-ingress.yaml b/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/templates/apache-ingress.yaml deleted file mode 100644 index b26f95bd..00000000 --- a/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/templates/apache-ingress.yaml +++ /dev/null @@ -1,41 +0,0 @@ -apiVersion: networking.k8s.io/v1 -kind: Ingress -metadata: - name: apache-ingress - namespace: test - namespace: test - annotations: - spec.ingressClassName: traefik - traefik.ingress.kubernetes.io/router.entrypoints: web -spec: - rules: - - host: f3s.buetow.org - http: - paths: - - path: / - pathType: Prefix - backend: - service: - name: apache-service - port: - number: 80 - - host: standby.f3s.buetow.org - http: - paths: - - path: / - pathType: 
Prefix - backend: - service: - name: apache-service - port: - number: 80 - - host: www.f3s.buetow.org - http: - paths: - - path: / - pathType: Prefix - backend: - service: - name: apache-service - port: - number: 80 diff --git a/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/templates/apache-persistent-volume.yaml b/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/templates/apache-persistent-volume.yaml deleted file mode 100644 index 7df28e6b..00000000 --- a/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/templates/apache-persistent-volume.yaml +++ /dev/null @@ -1,27 +0,0 @@ -apiVersion: v1 -kind: PersistentVolume -metadata: - name: example-apache-pv -spec: - capacity: - storage: 1Gi - volumeMode: Filesystem - accessModes: - - ReadWriteOnce - persistentVolumeReclaimPolicy: Retain - hostPath: - path: /data/nfs/k3svolumes/example-apache - type: Directory ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: example-apache-pvc - namespace: test -spec: - storageClassName: "" - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 1Gi diff --git a/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/templates/apache-service.yaml b/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/templates/apache-service.yaml deleted file mode 100644 index 1105e3a7..00000000 --- a/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/templates/apache-service.yaml +++ /dev/null @@ -1,17 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - labels: - app: apache - name: apache-service - namespace: test -spec: - ports: - - name: web - port: 80 - protocol: TCP - # Expose port 80 on the service - targetPort: 80 - selector: - # Link this service to pods with the label app=apache - app: apache diff --git a/gemfeed/examples/conf/f3s/example-apache/Justfile b/gemfeed/examples/conf/f3s/example-apache/Justfile deleted file mode 100644 index 579b9253..00000000 --- 
a/gemfeed/examples/conf/f3s/example-apache/Justfile +++ /dev/null @@ -1,12 +0,0 @@ -NAMESPACE := "test" -RELEASE_NAME := "example-apache" -CHART_PATH := "./helm-chart" - -install: - helm install {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} --create-namespace - -upgrade: - helm upgrade {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} - -delete: - helm uninstall {{RELEASE_NAME}} --namespace {{NAMESPACE}} diff --git a/gemfeed/examples/conf/f3s/example-apache/helm-chart/Chart.yaml b/gemfeed/examples/conf/f3s/example-apache/helm-chart/Chart.yaml deleted file mode 100644 index 6d496436..00000000 --- a/gemfeed/examples/conf/f3s/example-apache/helm-chart/Chart.yaml +++ /dev/null @@ -1,5 +0,0 @@ -apiVersion: v2 -name: apache -description: A Helm chart for deploying Apache -version: 0.1.0 -appVersion: "1.0" diff --git a/gemfeed/examples/conf/f3s/example-apache/helm-chart/README.md b/gemfeed/examples/conf/f3s/example-apache/helm-chart/README.md deleted file mode 100644 index 4eb16d4f..00000000 --- a/gemfeed/examples/conf/f3s/example-apache/helm-chart/README.md +++ /dev/null @@ -1,11 +0,0 @@ -# Apache Helm Chart - -This chart deploys a simple Apache web server. - -## Installing the Chart - -To install the chart with the release name `my-release`, run the following command: - -```bash -helm install example-apache . --namespace test --create-namespace -```
\ No newline at end of file diff --git a/gemfeed/examples/conf/f3s/example-apache/helm-chart/templates/apache-deployment.yaml b/gemfeed/examples/conf/f3s/example-apache/helm-chart/templates/apache-deployment.yaml deleted file mode 100644 index 364de1da..00000000 --- a/gemfeed/examples/conf/f3s/example-apache/helm-chart/templates/apache-deployment.yaml +++ /dev/null @@ -1,21 +0,0 @@ -# Apache HTTP Server Deployment -apiVersion: apps/v1 -kind: Deployment -metadata: - name: apache-deployment -spec: - replicas: 1 - selector: - matchLabels: - app: apache - template: - metadata: - labels: - app: apache - spec: - containers: - - name: apache - image: httpd:latest - ports: - # Container port where Apache listens - - containerPort: 80 diff --git a/gemfeed/examples/conf/f3s/example-apache/helm-chart/templates/apache-ingress.yaml b/gemfeed/examples/conf/f3s/example-apache/helm-chart/templates/apache-ingress.yaml deleted file mode 100644 index aa575edd..00000000 --- a/gemfeed/examples/conf/f3s/example-apache/helm-chart/templates/apache-ingress.yaml +++ /dev/null @@ -1,40 +0,0 @@ -apiVersion: networking.k8s.io/v1 -kind: Ingress -metadata: - name: apache-ingress - namespace: test - annotations: - spec.ingressClassName: traefik - traefik.ingress.kubernetes.io/router.entrypoints: web -spec: - rules: - - host: f3s.buetow.org - http: - paths: - - path: / - pathType: Prefix - backend: - service: - name: apache-service - port: - number: 80 - - host: standby.f3s.buetow.org - http: - paths: - - path: / - pathType: Prefix - backend: - service: - name: apache-service - port: - number: 80 - - host: www.f3s.buetow.org - http: - paths: - - path: / - pathType: Prefix - backend: - service: - name: apache-service - port: - number: 80 diff --git a/gemfeed/examples/conf/f3s/example-apache/helm-chart/templates/apache-service.yaml b/gemfeed/examples/conf/f3s/example-apache/helm-chart/templates/apache-service.yaml deleted file mode 100644 index 93b24acb..00000000 --- 
a/gemfeed/examples/conf/f3s/example-apache/helm-chart/templates/apache-service.yaml +++ /dev/null @@ -1,16 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - labels: - app: apache - name: apache-service -spec: - ports: - - name: web - port: 80 - protocol: TCP - # Expose port 80 on the service - targetPort: 80 - selector: - # Link this service to pods with the label app=apache - app: apache diff --git a/gemfeed/examples/conf/f3s/freshrss/Justfile b/gemfeed/examples/conf/f3s/freshrss/Justfile deleted file mode 100644 index d88fe3d4..00000000 --- a/gemfeed/examples/conf/f3s/freshrss/Justfile +++ /dev/null @@ -1,12 +0,0 @@ -NAMESPACE := "services" -RELEASE_NAME := "freshrss" -CHART_PATH := "./helm-chart" - -install: - helm install {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} --create-namespace - -upgrade: - helm upgrade {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} - -delete: - helm uninstall {{RELEASE_NAME}} --namespace {{NAMESPACE}} diff --git a/gemfeed/examples/conf/f3s/freshrss/README.md b/gemfeed/examples/conf/f3s/freshrss/README.md deleted file mode 100644 index 1a883725..00000000 --- a/gemfeed/examples/conf/f3s/freshrss/README.md +++ /dev/null @@ -1,29 +0,0 @@ -# FreshRSS Helm Chart - -This chart deploys FreshRSS using a single Deployment, Service, Ingress, and a hostPath-backed PersistentVolume/PersistentVolumeClaim for data. - -## Prerequisites - -Before installing the chart, you must manually create the hostPath directory used by the PersistentVolume (see `templates/persistent-volumes.yaml`): - -- `/data/nfs/k3svolumes/freshrss/data` - -Example commands: - -```bash -sudo mkdir -p /data/nfs/k3svolumes/freshrss/data -# Ensure write permissions for the runtime user/group (nobody:nogroup = 65534:65534) -sudo chown -R 65534:65534 /data/nfs/k3svolumes/freshrss/data -``` - -## Installing the Chart - -To install the chart with the release name `freshrss`, run: - -```bash -helm install freshrss . 
--namespace services --create-namespace -``` - -## Access - -- Ingress host: `freshrss.f3s.lan.buetow.org` diff --git a/gemfeed/examples/conf/f3s/freshrss/helm-chart/Chart.yaml b/gemfeed/examples/conf/f3s/freshrss/helm-chart/Chart.yaml deleted file mode 100644 index 05cd76a0..00000000 --- a/gemfeed/examples/conf/f3s/freshrss/helm-chart/Chart.yaml +++ /dev/null @@ -1,6 +0,0 @@ -apiVersion: v2 -name: freshrss -description: A Helm chart for deploying FreshRSS. -version: 0.1.0 -appVersion: "latest" - diff --git a/gemfeed/examples/conf/f3s/freshrss/helm-chart/templates/deployment.yaml b/gemfeed/examples/conf/f3s/freshrss/helm-chart/templates/deployment.yaml deleted file mode 100644 index 99f114cb..00000000 --- a/gemfeed/examples/conf/f3s/freshrss/helm-chart/templates/deployment.yaml +++ /dev/null @@ -1,48 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: freshrss - namespace: services -spec: - replicas: 1 - selector: - matchLabels: - app: freshrss - template: - metadata: - labels: - app: freshrss - spec: - securityContext: - runAsUser: 65534 # nobody - runAsGroup: 65534 # nobody / nogroup - fsGroup: 65534 # ensure mounted volumes are group-writable - runAsNonRoot: true - containers: - - name: freshrss - image: freshrss/freshrss:latest - ports: - - containerPort: 80 - volumeMounts: - - name: freshrss-data - mountPath: /var/www/FreshRSS/data - volumes: - - name: freshrss-data - persistentVolumeClaim: - claimName: freshrss-data-pvc ---- -apiVersion: v1 -kind: Service -metadata: - labels: - app: freshrss - name: freshrss-service - namespace: services -spec: - ports: - - name: web - port: 80 - protocol: TCP - targetPort: 80 - selector: - app: freshrss diff --git a/gemfeed/examples/conf/f3s/freshrss/helm-chart/templates/ingress.yaml b/gemfeed/examples/conf/f3s/freshrss/helm-chart/templates/ingress.yaml deleted file mode 100644 index 67409615..00000000 --- a/gemfeed/examples/conf/f3s/freshrss/helm-chart/templates/ingress.yaml +++ /dev/null @@ -1,21 +0,0 @@ 
-apiVersion: networking.k8s.io/v1 -kind: Ingress -metadata: - name: freshrss-ingress - namespace: services - annotations: - spec.ingressClassName: traefik - traefik.ingress.kubernetes.io/router.entrypoints: web -spec: - rules: - - host: freshrss.f3s.buetow.org - http: - paths: - - path: / - pathType: Prefix - backend: - service: - name: freshrss-service - port: - number: 80 - diff --git a/gemfeed/examples/conf/f3s/freshrss/helm-chart/templates/persistent-volumes.yaml b/gemfeed/examples/conf/f3s/freshrss/helm-chart/templates/persistent-volumes.yaml deleted file mode 100644 index 813d2acb..00000000 --- a/gemfeed/examples/conf/f3s/freshrss/helm-chart/templates/persistent-volumes.yaml +++ /dev/null @@ -1,28 +0,0 @@ -apiVersion: v1 -kind: PersistentVolume -metadata: - name: freshrss-data-pv -spec: - capacity: - storage: 1Gi - volumeMode: Filesystem - accessModes: - - ReadWriteOnce - persistentVolumeReclaimPolicy: Retain - hostPath: - path: /data/nfs/k3svolumes/freshrss/data - type: Directory ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: freshrss-data-pvc - namespace: services -spec: - storageClassName: "" - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 1Gi - diff --git a/gemfeed/examples/conf/f3s/miniflux/Justfile b/gemfeed/examples/conf/f3s/miniflux/Justfile deleted file mode 100644 index 5becacfe..00000000 --- a/gemfeed/examples/conf/f3s/miniflux/Justfile +++ /dev/null @@ -1,12 +0,0 @@ -NAMESPACE := "services" -RELEASE_NAME := "miniflux" -CHART_PATH := "./helm-chart" - -install: - helm install {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} --create-namespace - -upgrade: - helm upgrade {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} - -delete: - helm uninstall {{RELEASE_NAME}} --namespace {{NAMESPACE}} diff --git a/gemfeed/examples/conf/f3s/miniflux/README.md b/gemfeed/examples/conf/f3s/miniflux/README.md deleted file mode 100644 index 8795b457..00000000 --- a/gemfeed/examples/conf/f3s/miniflux/README.md 
+++ /dev/null @@ -1,56 +0,0 @@ -# Miniflux Helm Chart - -This chart deploys Miniflux. - -## Prerequisites - -Before installing the chart, you must manually create the following: - -1. **Database Password Secret:** - - Create a secret that contains only the database password. The chart reads - this value and constructs the Miniflux `DATABASE_URL` internally at runtime: - - ```bash - kubectl create secret generic miniflux-db-password \ - --from-literal=fluxdb_password='YOUR_PASSWORD' \ - -n services - ``` - - Replace `YOUR_PASSWORD` with your desired database password. You do not - need to provide a full DSN in the secret; the chart uses the password from - `fluxdb_password` to build: - - `postgres://miniflux:${POSTGRES_PASSWORD}@miniflux-postgres:5432/miniflux?sslmode=disable` - -2. **Admin Password Secret:** - - Create a secret for the initial Miniflux admin user password. The chart - reads this secret into the `ADMIN_PASSWORD` environment variable during - the first startup to create the admin user. The admin username is set - to `admin` in the deployment template. - - ```bash - kubectl create secret generic miniflux-admin-password \ - --from-literal=admin_password='YOUR_ADMIN_PASSWORD' \ - -n services - ``` - - Replace `YOUR_ADMIN_PASSWORD` with your desired password. The secret key - used by the chart is `admin_password`. - -3. **Persistent Volume Directory:** - - You must manually create the directory on your host system to be used by the persistent volume: - - ```bash - mkdir -p /data/nfs/k3svolumes/miniflux/data - ``` - -## Installing the Chart - -To install the chart with the release name `miniflux`, run the following command: - -```bash -helm install miniflux . 
--namespace services --create-namespace -``` diff --git a/gemfeed/examples/conf/f3s/miniflux/helm-chart/Chart.yaml b/gemfeed/examples/conf/f3s/miniflux/helm-chart/Chart.yaml deleted file mode 100644 index f88e3f3d..00000000 --- a/gemfeed/examples/conf/f3s/miniflux/helm-chart/Chart.yaml +++ /dev/null @@ -1,5 +0,0 @@ -apiVersion: v2 -name: miniflux -description: A Helm chart for deploying Miniflux. -version: 0.1.0 -appVersion: "latest" diff --git a/gemfeed/examples/conf/f3s/miniflux/helm-chart/templates/deployment.yaml b/gemfeed/examples/conf/f3s/miniflux/helm-chart/templates/deployment.yaml deleted file mode 100644 index 08647a73..00000000 --- a/gemfeed/examples/conf/f3s/miniflux/helm-chart/templates/deployment.yaml +++ /dev/null @@ -1,92 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: miniflux-server - labels: - app: miniflux-server -spec: - replicas: 1 - selector: - matchLabels: - app: miniflux-server - template: - metadata: - labels: - app: miniflux-server - spec: - initContainers: - - name: wait-for-postgres - image: postgres:17 - command: ["/bin/sh", "-c"] - args: - - | - echo "Waiting for Postgres at miniflux-postgres:5432..."; - until pg_isready -h miniflux-postgres -p 5432 -U miniflux; do - echo "Postgres not ready, sleeping..."; - sleep 2; - done; - echo "Postgres is ready." 
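The miniflux README above notes that the chart builds `postgres://miniflux:${POSTGRES_PASSWORD}@miniflux-postgres:5432/miniflux?sslmode=disable` by interpolating the raw password from the secret. That works for simple passwords, but characters such as `@` or `/` in the password would corrupt the URL. A small hypothetical sketch of the same construction with percent-encoding (the function name is illustrative, not part of the chart):

```python
from urllib.parse import quote

def miniflux_dsn(password: str) -> str:
    # Mirrors the DSN the deployment assembles in its shell `args`;
    # quote() percent-encodes characters like '@' or '/' that would
    # otherwise break the URL when interpolated verbatim.
    return (
        "postgres://miniflux:"
        + quote(password, safe="")
        + "@miniflux-postgres:5432/miniflux?sslmode=disable"
    )

print(miniflux_dsn("s3cret"))
# postgres://miniflux:s3cret@miniflux-postgres:5432/miniflux?sslmode=disable
print(miniflux_dsn("p@ss/word"))
# postgres://miniflux:p%40ss%2Fword@miniflux-postgres:5432/miniflux?sslmode=disable
```

If you stick with the chart's plain shell interpolation, keep the password free of URL-special characters.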
- containers: - - name: miniflux - image: miniflux/miniflux:latest - ports: - - containerPort: 8080 - env: - - name: CREATE_ADMIN - value: "1" - - name: ADMIN_USERNAME - value: "admin" - - name: ADMIN_PASSWORD - valueFrom: - secretKeyRef: - name: miniflux-admin-password - key: admin_password - - name: RUN_MIGRATIONS - value: "1" - - name: POLLING_FREQUENCY - value: "10" - - name: POSTGRES_PASSWORD - valueFrom: - secretKeyRef: - name: miniflux-db-password - key: fluxdb_password - command: ["/bin/sh", "-c"] - args: - - export DATABASE_URL="postgres://miniflux:${POSTGRES_PASSWORD}@miniflux-postgres:5432/miniflux?sslmode=disable"; exec /usr/bin/miniflux ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: miniflux-postgres - labels: - app: miniflux-postgres -spec: - replicas: 1 - selector: - matchLabels: - app: miniflux-postgres - template: - metadata: - labels: - app: miniflux-postgres - spec: - containers: - - name: miniflux-postgres - image: postgres:17 - ports: - - containerPort: 5432 - env: - - name: POSTGRES_USER - value: "miniflux" - - name: POSTGRES_PASSWORD - valueFrom: - secretKeyRef: - name: miniflux-db-password - key: fluxdb_password - volumeMounts: - - name: miniflux-postgres-data - mountPath: /var/lib/postgresql/data - volumes: - - name: miniflux-postgres-data - persistentVolumeClaim: - claimName: miniflux-postgres-pvc diff --git a/gemfeed/examples/conf/f3s/miniflux/helm-chart/templates/ingress.yaml b/gemfeed/examples/conf/f3s/miniflux/helm-chart/templates/ingress.yaml deleted file mode 100644 index 95f18389..00000000 --- a/gemfeed/examples/conf/f3s/miniflux/helm-chart/templates/ingress.yaml +++ /dev/null @@ -1,20 +0,0 @@ -apiVersion: networking.k8s.io/v1 -kind: Ingress -metadata: - name: miniflux-ingress - namespace: services - annotations: - spec.ingressClassName: traefik - traefik.ingress.kubernetes.io/router.entrypoints: web -spec: - rules: - - host: flux.f3s.buetow.org - http: - paths: - - path: / - pathType: Prefix - backend: - service: - 
name: miniflux - port: - number: 8080 diff --git a/gemfeed/examples/conf/f3s/miniflux/helm-chart/templates/persistent-volumes.yaml b/gemfeed/examples/conf/f3s/miniflux/helm-chart/templates/persistent-volumes.yaml deleted file mode 100644 index 2c4331c8..00000000 --- a/gemfeed/examples/conf/f3s/miniflux/helm-chart/templates/persistent-volumes.yaml +++ /dev/null @@ -1,27 +0,0 @@ -apiVersion: v1 -kind: PersistentVolume -metadata: - name: miniflux-postgres-pv -spec: - capacity: - storage: 1Gi - volumeMode: Filesystem - accessModes: - - ReadWriteOnce - persistentVolumeReclaimPolicy: Retain - hostPath: - path: /data/nfs/k3svolumes/miniflux/data - type: Directory ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: miniflux-postgres-pvc - namespace: services -spec: - storageClassName: "" - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 1Gi diff --git a/gemfeed/examples/conf/f3s/miniflux/helm-chart/templates/service.yaml b/gemfeed/examples/conf/f3s/miniflux/helm-chart/templates/service.yaml deleted file mode 100644 index 6855888f..00000000 --- a/gemfeed/examples/conf/f3s/miniflux/helm-chart/templates/service.yaml +++ /dev/null @@ -1,23 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: miniflux -spec: - selector: - app: miniflux-server - ports: - - protocol: TCP - port: 8080 - targetPort: 8080 ---- -apiVersion: v1 -kind: Service -metadata: - name: miniflux-postgres -spec: - selector: - app: miniflux-postgres - ports: - - protocol: TCP - port: 5432 - targetPort: 5432 diff --git a/gemfeed/examples/conf/f3s/opodsync/Justfile b/gemfeed/examples/conf/f3s/opodsync/Justfile deleted file mode 100644 index 3143637b..00000000 --- a/gemfeed/examples/conf/f3s/opodsync/Justfile +++ /dev/null @@ -1,12 +0,0 @@ -NAMESPACE := "services" -RELEASE_NAME := "opodsync" -CHART_PATH := "./helm-chart" - -install: - helm install {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} --create-namespace - -upgrade: - helm upgrade {{RELEASE_NAME}} 
{{CHART_PATH}} --namespace {{NAMESPACE}} - -delete: - helm uninstall {{RELEASE_NAME}} --namespace {{NAMESPACE}}
\ No newline at end of file diff --git a/gemfeed/examples/conf/f3s/opodsync/README.md b/gemfeed/examples/conf/f3s/opodsync/README.md deleted file mode 100644 index fd17938a..00000000 --- a/gemfeed/examples/conf/f3s/opodsync/README.md +++ /dev/null @@ -1,11 +0,0 @@ -# opodsync - -This Helm chart deploys opodsync, a gPodder-compatible podcast sync server. - -## Manual steps - -Before deploying, you need to create the following directory on your NFS share: - -```bash -mkdir -p /data/nfs/k3svolumes/opodsync/data -``` diff --git a/gemfeed/examples/conf/f3s/opodsync/helm-chart/Chart.yaml b/gemfeed/examples/conf/f3s/opodsync/helm-chart/Chart.yaml deleted file mode 100644 index 8d41abe1..00000000 --- a/gemfeed/examples/conf/f3s/opodsync/helm-chart/Chart.yaml +++ /dev/null @@ -1,5 +0,0 @@ -apiVersion: v2 -name: opodsync -description: A Helm chart for deploying opodsync. -version: 0.1.0 -appVersion: "latest" diff --git a/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/configmap-nginx.yaml b/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/configmap-nginx.yaml deleted file mode 100644 index b4c2ef62..00000000 --- a/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/configmap-nginx.yaml +++ /dev/null @@ -1,46 +0,0 @@ -apiVersion: v1 -kind: ConfigMap -metadata: - name: opodsync-nginx-config - namespace: services -data: - nginx.conf: | - worker_processes 1; - events { worker_connections 1024; } - http { - variables_hash_bucket_size 128; - include mime.types; - default_type application/octet-stream; - sendfile on; - keepalive_timeout 65; - - upstream backend { - server 127.0.0.1:8080; - } - - server { - listen 8081; - - # Preserve client details - proxy_set_header Host $host; - proxy_set_header X-Real-IP $remote_addr; - proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; - proxy_set_header X-Forwarded-Proto $scheme; - - # Root path internally proxies to /gpodder on backend - location = / { - proxy_pass http://backend/gpodder; - } - - # Pass through existing /gpodder paths unchanged - 
location /gpodder { - proxy_pass http://backend; - } - - # Fallback: proxy everything else as-is - location / { - proxy_pass http://backend; - } - } - } - diff --git a/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/deployment.yaml b/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/deployment.yaml deleted file mode 100644 index b0f11d9e..00000000 --- a/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/deployment.yaml +++ /dev/null @@ -1,43 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: opodsync - namespace: services -spec: - replicas: 1 - selector: - matchLabels: - app: opodsync - template: - metadata: - labels: - app: opodsync - spec: - containers: - - name: opodsync - image: ganeshlab/opodsync - env: - - name: GPODDER_BASE_URL - value: "https://gpodder.f3s.buetow.org/gpodder" - - name: GPODDER_ALLOW_REGISTRATIONS - value: "true" - ports: - - containerPort: 8080 - volumeMounts: - - name: opodsync-data - mountPath: /var/www/server/data - - name: nginx-proxy - image: nginx:1.25-alpine - ports: - - containerPort: 8081 - volumeMounts: - - name: nginx-config - mountPath: /etc/nginx/nginx.conf - subPath: nginx.conf - volumes: - - name: opodsync-data - persistentVolumeClaim: - claimName: opodsync-data-pvc - - name: nginx-config - configMap: - name: opodsync-nginx-config diff --git a/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/ingress.yaml b/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/ingress.yaml deleted file mode 100644 index a29d27bf..00000000 --- a/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/ingress.yaml +++ /dev/null @@ -1,20 +0,0 @@ -apiVersion: networking.k8s.io/v1 -kind: Ingress -metadata: - name: opodsync-ingress - namespace: services - annotations: - spec.ingressClassName: traefik - traefik.ingress.kubernetes.io/router.entrypoints: web -spec: - rules: - - host: gpodder.f3s.buetow.org - http: - paths: - - path: / - pathType: Prefix - backend: - service: - name: opodsync-service - port: - 
number: 80 diff --git a/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/persistent-volumes.yaml b/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/persistent-volumes.yaml deleted file mode 100644 index 0a6dedc0..00000000 --- a/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/persistent-volumes.yaml +++ /dev/null @@ -1,27 +0,0 @@ -apiVersion: v1 -kind: PersistentVolume -metadata: - name: opodsync-data-pv -spec: - capacity: - storage: 1Gi - volumeMode: Filesystem - accessModes: - - ReadWriteOnce - persistentVolumeReclaimPolicy: Retain - hostPath: - path: /data/nfs/k3svolumes/opodsync/data - type: DirectoryOrCreate ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: opodsync-data-pvc - namespace: services -spec: - storageClassName: "" - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 1Gi
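The opodsync nginx ConfigMap above defines three `location` rules: the exact root (`location = /`) is proxied to `/gpodder` on the backend, while existing `/gpodder` paths and everything else pass through unchanged. The resulting path mapping can be sketched as (illustrative only, not part of the chart):

```python
def upstream_path(request_path: str) -> str:
    """Mirrors the location rules in the opodsync nginx ConfigMap:
    `location = /` proxies the bare root to /gpodder on the backend,
    while /gpodder paths and everything else pass through as-is."""
    if request_path == "/":
        return "/gpodder"
    return request_path

print(upstream_path("/"))             # /gpodder
print(upstream_path("/gpodder/api"))  # /gpodder/api
print(upstream_path("/index.php"))    # /index.php
```

This is what lets clients point at the bare `gpodder.f3s.buetow.org` host while the opodsync container itself serves under `/gpodder`.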
\ No newline at end of file diff --git a/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/service.yaml b/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/service.yaml deleted file mode 100644 index 16763f03..00000000 --- a/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/service.yaml +++ /dev/null @@ -1,15 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - labels: - app: opodsync - name: opodsync-service - namespace: services -spec: - ports: - - name: web - port: 80 - protocol: TCP - targetPort: 8081 - selector: - app: opodsync diff --git a/gemfeed/examples/conf/f3s/radicale/Justfile b/gemfeed/examples/conf/f3s/radicale/Justfile deleted file mode 100644 index 6be7406a..00000000 --- a/gemfeed/examples/conf/f3s/radicale/Justfile +++ /dev/null @@ -1,12 +0,0 @@ -NAMESPACE := "services" -RELEASE_NAME := "radicale" -CHART_PATH := "./helm-chart" - -install: - helm install {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} --create-namespace - -upgrade: - helm upgrade {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} - -delete: - helm uninstall {{RELEASE_NAME}} --namespace {{NAMESPACE}} diff --git a/gemfeed/examples/conf/f3s/radicale/helm-chart/Chart.yaml b/gemfeed/examples/conf/f3s/radicale/helm-chart/Chart.yaml deleted file mode 100644 index 421dd485..00000000 --- a/gemfeed/examples/conf/f3s/radicale/helm-chart/Chart.yaml +++ /dev/null @@ -1,5 +0,0 @@ -apiVersion: v2 -name: radicale -description: A Helm chart for deploying Radicale, a CalDAV/CardDAV server. -version: 0.1.0 -appVersion: "latest" diff --git a/gemfeed/examples/conf/f3s/radicale/helm-chart/README.md b/gemfeed/examples/conf/f3s/radicale/helm-chart/README.md deleted file mode 100644 index 6f4f28f7..00000000 --- a/gemfeed/examples/conf/f3s/radicale/helm-chart/README.md +++ /dev/null @@ -1,18 +0,0 @@ -# Radicale Helm Chart - -This chart deploys a CalDAV/CardDAV server using Radicale. 
- -## Prerequisites - -Before installing the chart, you must manually create the following directories on your host system to be used by the persistent volumes: - -- `/data/nfs/k3svolumes/radicale/collections` -- `/data/nfs/k3svolumes/radicale/auth` - -## Installing the Chart - -To install the chart with the release name `radicale`, run the following command: - -```bash -helm install radicale . --namespace services --create-namespace -``` diff --git a/gemfeed/examples/conf/f3s/radicale/helm-chart/templates/deployment.yaml b/gemfeed/examples/conf/f3s/radicale/helm-chart/templates/deployment.yaml deleted file mode 100644 index 725fcba1..00000000 --- a/gemfeed/examples/conf/f3s/radicale/helm-chart/templates/deployment.yaml +++ /dev/null @@ -1,67 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: radicale - namespace: services -spec: - replicas: 1 - selector: - matchLabels: - app: radicale - template: - metadata: - labels: - app: radicale - spec: - initContainers: - - name: debug-auth-and-mounts - image: busybox:1.36 - command: ["/bin/sh", "-c"] - args: - - | - set -eu - echo "=== /proc/mounts ===" && cat /proc/mounts || true - echo "=== df -h ===" && df -h || true - echo "=== ls -lna / ===" && ls -lna / || true - echo "=== ls -lna /auth ===" && ls -lna /auth || true - echo "=== ls -lna /collections ===" && ls -lna /collections || true - echo "=== find /auth (maxdepth 2) ===" && find /auth -maxdepth 2 || true - [ -f /auth/htpasswd ] && { echo "=== stat /auth/htpasswd ==="; stat /auth/htpasswd || true; } || echo "htpasswd missing in init" - volumeMounts: - - name: radicale-collections - mountPath: /collections - - name: radicale-auth - mountPath: /auth - containers: - - name: radicale - image: registry.lan.buetow.org:30001/radicale:latest - ports: - - containerPort: 8080 - volumeMounts: - - name: radicale-collections - mountPath: /collections - - name: radicale-auth - mountPath: /auth - volumes: - - name: radicale-collections - persistentVolumeClaim: - 
claimName: radicale-collections-pvc - - name: radicale-auth - persistentVolumeClaim: - claimName: radicale-auth-pvc ---- -apiVersion: v1 -kind: Service -metadata: - labels: - app: radicale - name: radicale-service - namespace: services -spec: - ports: - - name: web - port: 80 - protocol: TCP - targetPort: 8080 - selector: - app: radicale diff --git a/gemfeed/examples/conf/f3s/radicale/helm-chart/templates/ingress.yaml b/gemfeed/examples/conf/f3s/radicale/helm-chart/templates/ingress.yaml deleted file mode 100644 index 680ab7d8..00000000 --- a/gemfeed/examples/conf/f3s/radicale/helm-chart/templates/ingress.yaml +++ /dev/null @@ -1,20 +0,0 @@ -apiVersion: networking.k8s.io/v1 -kind: Ingress -metadata: - name: radicale-ingress - namespace: services - annotations: - spec.ingressClassName: traefik - traefik.ingress.kubernetes.io/router.entrypoints: web -spec: - rules: - - host: radicale.f3s.buetow.org - http: - paths: - - path: / - pathType: Prefix - backend: - service: - name: radicale-service - port: - number: 80 diff --git a/gemfeed/examples/conf/f3s/radicale/helm-chart/templates/persistent-volumes.yaml b/gemfeed/examples/conf/f3s/radicale/helm-chart/templates/persistent-volumes.yaml deleted file mode 100644 index 95d64883..00000000 --- a/gemfeed/examples/conf/f3s/radicale/helm-chart/templates/persistent-volumes.yaml +++ /dev/null @@ -1,55 +0,0 @@ -apiVersion: v1 -kind: PersistentVolume -metadata: - name: radicale-collections-pv -spec: - capacity: - storage: 1Gi - volumeMode: Filesystem - accessModes: - - ReadWriteOnce - persistentVolumeReclaimPolicy: Retain - hostPath: - path: /data/nfs/k3svolumes/radicale/collections - type: Directory ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: radicale-collections-pvc - namespace: services -spec: - storageClassName: "" - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 1Gi ---- -apiVersion: v1 -kind: PersistentVolume -metadata: - name: radicale-auth-pv -spec: - capacity: - storage: 1Gi - 
volumeMode: Filesystem - accessModes: - - ReadWriteOnce - persistentVolumeReclaimPolicy: Retain - hostPath: - path: /data/nfs/k3svolumes/radicale/auth - type: Directory ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: radicale-auth-pvc - namespace: services -spec: - storageClassName: "" - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 1Gi diff --git a/gemfeed/examples/conf/f3s/registry/Justfile b/gemfeed/examples/conf/f3s/registry/Justfile deleted file mode 100644 index 297d95a7..00000000 --- a/gemfeed/examples/conf/f3s/registry/Justfile +++ /dev/null @@ -1,12 +0,0 @@ -NAMESPACE := "infra" -RELEASE_NAME := "registry" -CHART_PATH := "./helm-chart" - -install: - helm install {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} --create-namespace - -upgrade: - helm upgrade {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} - -delete: - helm uninstall {{RELEASE_NAME}} --namespace {{NAMESPACE}} diff --git a/gemfeed/examples/conf/f3s/registry/README.md b/gemfeed/examples/conf/f3s/registry/README.md deleted file mode 100644 index bcf30a3a..00000000 --- a/gemfeed/examples/conf/f3s/registry/README.md +++ /dev/null @@ -1,69 +0,0 @@ -# Private Docker Registry - -This document describes how to push Docker images to the private registry deployed in your Kubernetes cluster. - -## Prerequisites - -* A running Kubernetes cluster. -* `kubectl` configured to connect to your cluster. -* Docker installed and running on your local machine. - -## Steps - -0. **Create the registry directory in the NFS share** - -1. **Tag your Docker image:** - - Replace `<your-image>` with the name of your local Docker image and `<node-ip>` with the IP address of any node in your Kubernetes cluster. The registry is available on NodePort `30001`. - - ```bash - docker tag <your-image> <node-ip>:30001/<your-image> - ``` - -2. **Push the image to the registry:** - - ```bash - docker push <node-ip>:30001/<your-image> - ``` - -3. 
**Pull the image from the registry (from a Kubernetes pod):** - - You can now use the image in your Kubernetes deployments by referencing it as `docker-registry-service:5000/<your-image>`. - -## Communication - -The Docker registry is exposed via a static NodePort (`30001`) and uses plain HTTP. It is not configured for TLS, so your local Docker daemon needs to trust these endpoints as insecure registries. - - - First, run this command to create or update the configuration file. This command will overwrite the file if it exists. - - sudo bash -c 'echo "{ \"insecure-registries\": [\"r0.lan.buetow.org:30001\",\"r1.lan.buetow.org:30001\",\"r2.lan.buetow.org:30001\"] }" > /etc/docker/daemon.json' - - After running that command, you need to restart your Docker daemon for the changes to take effect. - - sudo systemctl restart docker - - -Afterwards, I could push the anki-sync-server image. - -## K3s Configuration - -To use the private registry from within the k3s cluster, you need to configure each k3s node. - -### 1. Update /etc/hosts -On each k3s node, you must ensure that `registry.lan.buetow.org` resolves to the node's loopback address. You can do this by adding an entry to the `/etc/hosts` file. - -Run the following command, which will add the entry to `r0`, `r1`, and `r2`: -```bash -for node in r0 r1 r2; do ssh root@$node "echo '127.0.0.1 registry.lan.buetow.org' >> /etc/hosts"; done -``` - -### 2. Configure K3s to trust the insecure registry -You need to configure each k3s node to trust the insecure registry. This is done by creating a `registries.yaml` file in `/etc/rancher/k3s/` on each node. - -The following command will create the file and restart the k3s service.
You will need to run this for each node (`r0`, `r1`, `r2`): - -```bash -ssh root@<node> "echo -e 'mirrors:\n  \"registry.lan.buetow.org:30001\":\n    endpoint:\n      - \"http://localhost:30001\"' > /etc/rancher/k3s/registries.yaml && systemctl restart k3s" -``` - diff --git a/gemfeed/examples/conf/f3s/registry/helm-chart/Chart.yaml b/gemfeed/examples/conf/f3s/registry/helm-chart/Chart.yaml deleted file mode 100644 index 0f7d68fa..00000000 --- a/gemfeed/examples/conf/f3s/registry/helm-chart/Chart.yaml +++ /dev/null @@ -1,5 +0,0 @@ -apiVersion: v2 -name: registry -description: A Helm chart for deploying a private Docker registry. -version: 0.1.0 -appVersion: "2.0" diff --git a/gemfeed/examples/conf/f3s/registry/helm-chart/README.md b/gemfeed/examples/conf/f3s/registry/helm-chart/README.md deleted file mode 100644 index 42694360..00000000 --- a/gemfeed/examples/conf/f3s/registry/helm-chart/README.md +++ /dev/null @@ -1,11 +0,0 @@ -# Docker Registry Helm Chart - -This chart deploys a simple Docker registry. - -## Installing the Chart - -To install the chart with the release name `my-release`, run the following command: - -```bash -helm install registry .
-``` diff --git a/gemfeed/examples/conf/f3s/registry/helm-chart/templates/deployment.yaml b/gemfeed/examples/conf/f3s/registry/helm-chart/templates/deployment.yaml deleted file mode 100644 index 70522f8d..00000000 --- a/gemfeed/examples/conf/f3s/registry/helm-chart/templates/deployment.yaml +++ /dev/null @@ -1,29 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: docker-registry - namespace: infra - labels: - app: docker-registry -spec: - replicas: 1 - selector: - matchLabels: - app: docker-registry - template: - metadata: - labels: - app: docker-registry - spec: - containers: - - name: registry - image: registry:2 - ports: - - containerPort: 5000 - volumeMounts: - - name: registry-storage - mountPath: /var/lib/registry - volumes: - - name: registry-storage - persistentVolumeClaim: - claimName: docker-registry-pvc diff --git a/gemfeed/examples/conf/f3s/registry/helm-chart/templates/pv.yaml b/gemfeed/examples/conf/f3s/registry/helm-chart/templates/pv.yaml deleted file mode 100644 index fb747ca0..00000000 --- a/gemfeed/examples/conf/f3s/registry/helm-chart/templates/pv.yaml +++ /dev/null @@ -1,14 +0,0 @@ -apiVersion: v1 -kind: PersistentVolume -metadata: - name: docker-registry-pv -spec: - capacity: - storage: 5Gi - volumeMode: Filesystem - accessModes: - - ReadWriteOnce - persistentVolumeReclaimPolicy: Retain - hostPath: - path: /data/nfs/k3svolumes/registry - type: Directory diff --git a/gemfeed/examples/conf/f3s/registry/helm-chart/templates/pvc.yaml b/gemfeed/examples/conf/f3s/registry/helm-chart/templates/pvc.yaml deleted file mode 100644 index e769c893..00000000 --- a/gemfeed/examples/conf/f3s/registry/helm-chart/templates/pvc.yaml +++ /dev/null @@ -1,12 +0,0 @@ -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: docker-registry-pvc - namespace: infra -spec: - storageClassName: "" - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 5Gi diff --git a/gemfeed/examples/conf/f3s/registry/helm-chart/templates/service.yaml 
b/gemfeed/examples/conf/f3s/registry/helm-chart/templates/service.yaml deleted file mode 100644 index a97f14e0..00000000 --- a/gemfeed/examples/conf/f3s/registry/helm-chart/templates/service.yaml +++ /dev/null @@ -1,14 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: docker-registry-service - namespace: infra -spec: - selector: - app: docker-registry - ports: - - protocol: TCP - port: 5000 - targetPort: 5000 - nodePort: 30001 - type: NodePort diff --git a/gemfeed/examples/conf/f3s/syncthing/Justfile b/gemfeed/examples/conf/f3s/syncthing/Justfile deleted file mode 100644 index 4be94ee2..00000000 --- a/gemfeed/examples/conf/f3s/syncthing/Justfile +++ /dev/null @@ -1,12 +0,0 @@ -NAMESPACE := "services" -RELEASE_NAME := "syncthing" -CHART_PATH := "./helm-chart" - -install: - helm install {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} --create-namespace - -upgrade: - helm upgrade {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} - -delete: - helm uninstall {{RELEASE_NAME}} --namespace {{NAMESPACE}} diff --git a/gemfeed/examples/conf/f3s/syncthing/README.md b/gemfeed/examples/conf/f3s/syncthing/README.md deleted file mode 100644 index 3e2344ab..00000000 --- a/gemfeed/examples/conf/f3s/syncthing/README.md +++ /dev/null @@ -1,20 +0,0 @@ -# Syncthing Kubernetes Deployment - -This directory contains the Kubernetes configuration for deploying Syncthing. - -## Deployment - -To deploy Syncthing, apply the Kubernetes manifests in this directory: - -```bash -make apply -``` - -## Configuration - -The deployment uses two persistent volumes: -- `syncthing-config-pv`: for the syncthing configuration. Mapped to `/data/nfs/k3svolumes/syncthing/config` on the host. -- `syncthing-data-pv`: for the syncthing data. Mapped to `/data/nfs/k3svolumes/syncthing/data` on the host. - -The web UI is available at http://syncthing.f3s.buetow.org. -The data port is exposed on port 22000. 
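The registry Service above maps NodePort `30001` to the registry container's port `5000`, which is what makes the `<node-ip>:30001/<your-image>` naming in the registry README work. A minimal sketch of composing such an image reference (the node name is taken from the article's `r0.lan.buetow.org` setup; the image name is a placeholder assumption):

```shell
# Sketch only: build the image reference for a registry exposed on
# NodePort 30001. "example-image" is a placeholder, not from the article.
node="r0.lan.buetow.org"
image="example-image"
tagged="${node}:30001/${image}"

# The actual push from the README would then be:
#   docker tag "$image" "$tagged" && docker push "$tagged"
echo "$tagged"
```

Inside the cluster, traffic to that NodePort is forwarded by kube-proxy to the single registry pod, so every node address works equally well.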
diff --git a/gemfeed/examples/conf/f3s/syncthing/helm-chart/Chart.yaml b/gemfeed/examples/conf/f3s/syncthing/helm-chart/Chart.yaml deleted file mode 100644 index 2b982524..00000000 --- a/gemfeed/examples/conf/f3s/syncthing/helm-chart/Chart.yaml +++ /dev/null @@ -1,5 +0,0 @@ -apiVersion: v2 -name: syncthing -description: A Helm chart for deploying Syncthing. -version: 0.1.0 -appVersion: "latest" diff --git a/gemfeed/examples/conf/f3s/syncthing/helm-chart/README.md b/gemfeed/examples/conf/f3s/syncthing/helm-chart/README.md deleted file mode 100644 index 0cc23919..00000000 --- a/gemfeed/examples/conf/f3s/syncthing/helm-chart/README.md +++ /dev/null @@ -1,11 +0,0 @@ -# Syncthing Helm Chart - -This chart deploys Syncthing. - -## Installing the Chart - -To install the chart with the release name `my-release`, run the following command: - -```bash -helm install syncthing . --namespace services --create-namespace -``` diff --git a/gemfeed/examples/conf/f3s/syncthing/helm-chart/templates/deployment.yaml b/gemfeed/examples/conf/f3s/syncthing/helm-chart/templates/deployment.yaml deleted file mode 100644 index 9a85a174..00000000 --- a/gemfeed/examples/conf/f3s/syncthing/helm-chart/templates/deployment.yaml +++ /dev/null @@ -1,33 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: syncthing - namespace: services -spec: - replicas: 1 - selector: - matchLabels: - app: syncthing - template: - metadata: - labels: - app: syncthing - spec: - containers: - - name: syncthing - image: lscr.io/linuxserver/syncthing:latest - ports: - - containerPort: 8384 - - containerPort: 22000 - volumeMounts: - - name: syncthing-config - mountPath: /config - - name: syncthing-data - mountPath: /data - volumes: - - name: syncthing-config - persistentVolumeClaim: - claimName: syncthing-config-pvc - - name: syncthing-data - persistentVolumeClaim: - claimName: syncthing-data-pvc diff --git a/gemfeed/examples/conf/f3s/syncthing/helm-chart/templates/ingress.yaml 
b/gemfeed/examples/conf/f3s/syncthing/helm-chart/templates/ingress.yaml deleted file mode 100644 index b1e68e1f..00000000 --- a/gemfeed/examples/conf/f3s/syncthing/helm-chart/templates/ingress.yaml +++ /dev/null @@ -1,20 +0,0 @@ -apiVersion: networking.k8s.io/v1 -kind: Ingress -metadata: - name: syncthing-ingress - namespace: services - annotations: - spec.ingressClassName: traefik - traefik.ingress.kubernetes.io/router.entrypoints: web -spec: - rules: - - host: syncthing.f3s.buetow.org - http: - paths: - - path: / - pathType: Prefix - backend: - service: - name: syncthing-service - port: - number: 8384 diff --git a/gemfeed/examples/conf/f3s/syncthing/helm-chart/templates/persistent-volume.yaml b/gemfeed/examples/conf/f3s/syncthing/helm-chart/templates/persistent-volume.yaml deleted file mode 100644 index 793ae608..00000000 --- a/gemfeed/examples/conf/f3s/syncthing/helm-chart/templates/persistent-volume.yaml +++ /dev/null @@ -1,55 +0,0 @@ -apiVersion: v1 -kind: PersistentVolume -metadata: - name: syncthing-config-pv -spec: - capacity: - storage: 1Gi - volumeMode: Filesystem - accessModes: - - ReadWriteOnce - persistentVolumeReclaimPolicy: Retain - hostPath: - path: /data/nfs/k3svolumes/syncthing/config - type: Directory ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: syncthing-config-pvc - namespace: services -spec: - storageClassName: "" - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 1Gi ---- -apiVersion: v1 -kind: PersistentVolume -metadata: - name: syncthing-data-pv -spec: - capacity: - storage: 300Gi - volumeMode: Filesystem - accessModes: - - ReadWriteOnce - persistentVolumeReclaimPolicy: Retain - hostPath: - path: /data/nfs/k3svolumes/syncthing/data - type: Directory ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: syncthing-data-pvc - namespace: services -spec: - storageClassName: "" - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 300Gi
\ No newline at end of file diff --git a/gemfeed/examples/conf/f3s/syncthing/helm-chart/templates/service.yaml b/gemfeed/examples/conf/f3s/syncthing/helm-chart/templates/service.yaml deleted file mode 100644 index 74bf5ed4..00000000 --- a/gemfeed/examples/conf/f3s/syncthing/helm-chart/templates/service.yaml +++ /dev/null @@ -1,19 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - labels: - app: syncthing - name: syncthing-service - namespace: services -spec: - ports: - - name: web - port: 8384 - protocol: TCP - targetPort: 8384 - - name: data - port: 22000 - protocol: TCP - targetPort: 22000 - selector: - app: syncthing diff --git a/gemfeed/examples/conf/f3s/wallabag/Justfile b/gemfeed/examples/conf/f3s/wallabag/Justfile deleted file mode 100644 index 6c3a8818..00000000 --- a/gemfeed/examples/conf/f3s/wallabag/Justfile +++ /dev/null @@ -1,12 +0,0 @@ -NAMESPACE := "services" -RELEASE_NAME := "wallabag" -CHART_PATH := "./helm-chart" - -install: - helm install {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} --create-namespace - -upgrade: - helm upgrade {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} - -delete: - helm uninstall {{RELEASE_NAME}} --namespace {{NAMESPACE}} diff --git a/gemfeed/examples/conf/f3s/wallabag/helm-chart/Chart.yaml b/gemfeed/examples/conf/f3s/wallabag/helm-chart/Chart.yaml deleted file mode 100644 index 2fb05aba..00000000 --- a/gemfeed/examples/conf/f3s/wallabag/helm-chart/Chart.yaml +++ /dev/null @@ -1,5 +0,0 @@ -apiVersion: v2 -name: wallabag -description: A Helm chart for deploying Wallabag. -version: 0.1.0 -appVersion: "latest" diff --git a/gemfeed/examples/conf/f3s/wallabag/helm-chart/README.md b/gemfeed/examples/conf/f3s/wallabag/helm-chart/README.md deleted file mode 100644 index 5db600b9..00000000 --- a/gemfeed/examples/conf/f3s/wallabag/helm-chart/README.md +++ /dev/null @@ -1,18 +0,0 @@ -# Wallabag Helm Chart - -This chart deploys Wallabag. 
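The wallabag Justfile above repeats the same three-recipe helm wrapper used by every chart in this tree. Expanded to plain shell (a sketch; `echo` is used so the commands are printed rather than executed), the recipes amount to:

```shell
# Plain-shell equivalent of the Justfile recipes above.
# echo keeps this side-effect free; drop it to actually run helm.
NAMESPACE="services"
RELEASE_NAME="wallabag"
CHART_PATH="./helm-chart"

helm_install() {
  echo helm install "$RELEASE_NAME" "$CHART_PATH" --namespace "$NAMESPACE" --create-namespace
}
helm_upgrade() {
  echo helm upgrade "$RELEASE_NAME" "$CHART_PATH" --namespace "$NAMESPACE"
}

cmd="$(helm_install)"
echo "$cmd"
```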
- -## Prerequisites - -Before installing the chart, you must manually create the following directories on your host system to be used by the persistent volumes: - -- `/data/nfs/k3svolumes/wallabag/data` -- `/data/nfs/k3svolumes/wallabag/images` - -## Installing the Chart - -To install the chart with the release name `my-release`, run the following command: - -```bash -helm install wallabag . --namespace services --create-namespace -``` diff --git a/gemfeed/examples/conf/f3s/wallabag/helm-chart/templates/deployment.yaml b/gemfeed/examples/conf/f3s/wallabag/helm-chart/templates/deployment.yaml deleted file mode 100644 index 25dcffdc..00000000 --- a/gemfeed/examples/conf/f3s/wallabag/helm-chart/templates/deployment.yaml +++ /dev/null @@ -1,51 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: wallabag - namespace: services -spec: - replicas: 1 - selector: - matchLabels: - app: wallabag - template: - metadata: - labels: - app: wallabag - spec: - containers: - - name: wallabag - image: wallabag/wallabag - ports: - - containerPort: 80 - env: - - name: SYMFONY__ENV__DOMAIN_NAME - value: "https://bag.f3s.buetow.org" - volumeMounts: - - name: wallabag-data - mountPath: /var/www/wallabag/data - - name: wallabag-images - mountPath: /var/www/wallabag/web/assets/images - volumes: - - name: wallabag-data - persistentVolumeClaim: - claimName: wallabag-data-pvc - - name: wallabag-images - persistentVolumeClaim: - claimName: wallabag-images-pvc ---- -apiVersion: v1 -kind: Service -metadata: - labels: - app: wallabag - name: wallabag-service - namespace: services -spec: - ports: - - name: web - port: 80 - protocol: TCP - targetPort: 80 - selector: - app: wallabag diff --git a/gemfeed/examples/conf/f3s/wallabag/helm-chart/templates/ingress.yaml b/gemfeed/examples/conf/f3s/wallabag/helm-chart/templates/ingress.yaml deleted file mode 100644 index deb489aa..00000000 --- a/gemfeed/examples/conf/f3s/wallabag/helm-chart/templates/ingress.yaml +++ /dev/null @@ -1,20 +0,0 @@ 
-apiVersion: networking.k8s.io/v1 -kind: Ingress -metadata: - name: wallabag-ingress - namespace: services - annotations: - spec.ingressClassName: traefik - traefik.ingress.kubernetes.io/router.entrypoints: web -spec: - rules: - - host: bag.f3s.buetow.org - http: - paths: - - path: / - pathType: Prefix - backend: - service: - name: wallabag-service - port: - number: 80 diff --git a/gemfeed/examples/conf/f3s/wallabag/helm-chart/templates/persistent-volumes.yaml b/gemfeed/examples/conf/f3s/wallabag/helm-chart/templates/persistent-volumes.yaml deleted file mode 100644 index 6f5346aa..00000000 --- a/gemfeed/examples/conf/f3s/wallabag/helm-chart/templates/persistent-volumes.yaml +++ /dev/null @@ -1,55 +0,0 @@ -apiVersion: v1 -kind: PersistentVolume -metadata: - name: wallabag-data-pv -spec: - capacity: - storage: 1Gi - volumeMode: Filesystem - accessModes: - - ReadWriteOnce - persistentVolumeReclaimPolicy: Retain - hostPath: - path: /data/nfs/k3svolumes/wallabag/data - type: Directory ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: wallabag-data-pvc - namespace: services -spec: - storageClassName: "" - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 1Gi ---- -apiVersion: v1 -kind: PersistentVolume -metadata: - name: wallabag-images-pv -spec: - capacity: - storage: 1Gi - volumeMode: Filesystem - accessModes: - - ReadWriteOnce - persistentVolumeReclaimPolicy: Retain - hostPath: - path: /data/nfs/k3svolumes/wallabag/images - type: Directory ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: wallabag-images-pvc - namespace: services -spec: - storageClassName: "" - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 1Gi diff --git a/gemfeed/examples/conf/frontends/README.md b/gemfeed/examples/conf/frontends/README.md deleted file mode 100644 index e2d59d95..00000000 --- a/gemfeed/examples/conf/frontends/README.md +++ /dev/null @@ -1,3 +0,0 @@ -# Frontends - -Rexify my internet facing frontend servers! 
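The wallabag chart above, like the others in this tree, pairs each hostPath PersistentVolume with a PVC that sets `storageClassName: ""`, which pins the claim to the pre-created PV instead of asking a dynamic provisioner. The hostPath directories must therefore exist on the NFS share before the PVCs can bind. A sketch of preparing them (paths from the manifests above; `echo` keeps it side-effect free):

```shell
# Sketch: the hostPath directories backing the wallabag PVs must exist
# before the PVCs can bind (paths taken from the manifests above).
base="/data/nfs/k3svolumes"
for d in wallabag/data wallabag/images; do
  echo mkdir -p "${base}/${d}"
done
```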
diff --git a/gemfeed/examples/conf/frontends/Rexfile b/gemfeed/examples/conf/frontends/Rexfile deleted file mode 100644 index 0079387e..00000000 --- a/gemfeed/examples/conf/frontends/Rexfile +++ /dev/null @@ -1,648 +0,0 @@ -# How to use: -# -# rex commons -# -# Why use Rex to automate my servers? Because Rex is KISS; Puppet, Salt and Chef -# are not. So, why not use Ansible then? To use Ansible correctly you should also -# install Python on the target machines (not strictly mandatory, but better). -# Rex is programmed in Perl and there is already Perl in the base system of OpenBSD. -# Also, I find Perl > Python (my personal opinion). - -use Rex -feature => [ '1.14', 'exec_autodie' ]; -use Rex::Logger; -use File::Slurp; - -# REX CONFIG SECTION - -group frontends => 'blowfish.buetow.org:2', 'fishfinger.buetow.org:2'; -our $ircbouncer_server = 'fishfinger.buetow.org:2'; -group ircbouncer => $ircbouncer_server; -group openbsd_canary => 'fishfinger.buetow.org:2'; - -user 'rex'; -sudo TRUE; - -parallelism 5; - -# CUSTOM (PERL-ish) CONFIG SECTION (what Rex can't do by itself) -# Note we're using anonymous subs here. This is so we can pass the subs as -# Rex template variables too. - -our %ips = ( - 'fishfinger' => { - 'ipv4' => '46.23.94.99', - 'ipv6' => '2a03:6000:6f67:624::99', - }, - 'blowfish' => { - 'ipv4' => '23.88.35.144', - 'ipv6' => '2a01:4f8:c17:20f1::42', - }, - 'domain' => 'buetow.org', -); - -$ips{current_master} = $ips{fishfinger}; -$ips{current_master}{fqdn} = 'fishfinger.' . $ips{domain}; - -$ips{current_standby} = $ips{blowfish}; -$ips{current_standby}{fqdn} = 'blowfish.' . $ips{domain}; - -# Gather IPv6 addresses based on hostname.
-our $ipv6address = sub { - my $hostname = shift; - my $ip = $ips{$hostname}{ipv6}; - unless ( defined $ip ) { - Rex::Logger::info( "Unable to determine IPv6 address for $hostname", 'error' ); - return '::1'; - } - return $ip; -}; - -# Bootstrapping the FQDN based on the server IP as the hostname and domain -# facts aren't set yet due to the myname file in the first place. -our $fqdns = sub { - my $ipv4 = shift; - while ( my ( $hostname, $ips ) = each %ips ) { - return "$hostname." . $ips{domain} if $ips->{ipv4} eq $ipv4; - } - Rex::Logger::info( "Unable to determine hostname for $ipv4", 'error' ); - return 'HOSTNAME-UNKNOWN.' . $ips{domain}; -}; - -# TODO: Rename rexfilesecrets.txt to confsecrets.txt?! Or wait for RCM migration. -# The secret store. Note to myself: "geheim cat rexfilesecrets.txt" -our $secrets = sub { read_file './secrets/' . shift }; - -our @dns_zones = qw/buetow.org dtail.dev foo.zone irregular.ninja snonux.foo paul.cyou/; -our @dns_zones_remove = qw//; - -# k3s cluster running on FreeBSD in my LAN -our @f3s_hosts = - qw/f3s.buetow.org anki.f3s.buetow.org bag.f3s.buetow.org flux.f3s.buetow.org audiobookshelf.f3s.buetow.org gpodder.f3s.buetow.org radicale.f3s.buetow.org vault.f3s.buetow.org syncthing.f3s.buetow.org uprecords.f3s.buetow.org/; - -# optionally, only enable manually for temp time, as no password protection yet -# push @f3s_hosts, 'registry.f3s.buetow.org'; - -our @acme_hosts = - qw/buetow.org git.buetow.org paul.buetow.org joern.buetow.org dory.buetow.org ecat.buetow.org blog.buetow.org fotos.buetow.org znc.buetow.org dtail.dev foo.zone stats.foo.zone irregular.ninja alt.irregular.ninja snonux.foo/; -push @acme_hosts, @f3s_hosts; - -# UTILITY TASKS - -task 'id', group => 'frontends', sub { say run 'id' }; -task 'dump_info', group => 'frontends', sub { dump_system_information }; - -# OPENBSD TASKS SECTION - -desc 'Install base stuff'; -task 'base', - group => 'frontends', - sub { - pkg 'figlet', ensure => present; - pkg 'tig', ensure 
=> present; - pkg 'vger', ensure => present; - pkg 'zsh', ensure => present; - pkg 'bash', ensure => present; - pkg 'helix', ensure => present; - - my @pkg_scripts = qw/uptimed httpd dserver icinga2/; - push @pkg_scripts, 'znc' if connection->server eq $ircbouncer_server; - my $pkg_scripts = join ' ', @pkg_scripts; - append_if_no_such_line '/etc/rc.conf.local', "pkg_scripts=\"$pkg_scripts\""; - run 'touch /etc/rc.local'; - - file '/etc/myname', - content => template( './etc/myname.tpl', fqdns => $fqdns ), - owner => 'root', - group => 'wheel', - mode => '644'; - }; - -desc 'Setup uptimed'; -task 'uptimed', - group => 'frontends', - sub { - pkg 'uptimed', ensure => present; - service 'uptimed', ensure => 'started'; - }; - -desc 'Setup rsync'; -task 'rsync', - group => 'frontends', - sub { - pkg 'rsync', ensure => present; - - # Not required, as we use rsyncd via inetd - # append_if_no_such_line '/etc/rc.conf.local', 'rsyncd_flags='; - - file '/etc/rsyncd.conf', - content => template('./etc/rsyncd.conf.tpl'), - owner => 'root', - group => 'wheel', - mode => '644'; - - file '/usr/local/bin/rsync.sh', - content => template('./scripts/rsync.sh.tpl'), - owner => 'root', - group => 'wheel', - mode => '755'; - - file '/tmp/rsync.cron', - ensure => 'file', - content => "*/5\t*\t*\t*\t*\t-ns /usr/local/bin/rsync.sh", - mode => '600'; - - run '{ crontab -l -u root ; cat /tmp/rsync.cron; } | uniq | crontab -u root -'; - run 'rm /tmp/rsync.cron'; - }; - -desc 'Configure the gemtexter sites'; -task 'gemtexter', - group => 'frontends', - sub { - file '/usr/local/bin/gemtexter.sh', - content => template('./scripts/gemtexter.sh.tpl'), - owner => 'root', - group => 'wheel', - mode => '744'; - - file '/etc/daily.local', - ensure => 'present', - owner => 'root', - group => 'wheel', - mode => '644'; - - append_if_no_such_line '/etc/daily.local', '/usr/local/bin/gemtexter.sh'; - }; - -desc 'Configure taskwarrior reminder'; -task 'taskwarrior', - group => 'frontends', - sub { - pkg 
'taskwarrior', ensure => present; - - file '/usr/local/bin/taskwarrior.sh', - content => template('./scripts/taskwarrior.sh.tpl'), - owner => 'root', - group => 'wheel', - mode => '500'; - - file '/etc/taskrc', - content => template('./etc/taskrc.tpl'), - owner => 'root', - group => 'wheel', - mode => '600'; - - append_if_no_such_line '/etc/daily.local', '/usr/local/bin/taskwarrior.sh'; - }; - -desc 'Configure ACME client'; -task 'acme', - group => 'frontends', - sub { - file '/etc/acme-client.conf', - content => template( './etc/acme-client.conf.tpl', acme_hosts => \@acme_hosts ), - owner => 'root', - group => 'wheel', - mode => '644'; - - file '/usr/local/bin/acme.sh', - content => template( './scripts/acme.sh.tpl', acme_hosts => \@acme_hosts ), - owner => 'root', - group => 'wheel', - mode => '744'; - - file '/etc/daily.local', - ensure => 'present', - owner => 'root', - group => 'wheel', - mode => '644'; - - append_if_no_such_line '/etc/daily.local', '/usr/local/bin/acme.sh'; - }; - -desc 'Invoke ACME client'; -task 'acme_invoke', - group => 'frontends', - sub { - say run '/usr/local/bin/acme.sh'; - }; - -desc 'Setup httpd'; -task 'httpd', - group => 'frontends', - sub { - append_if_no_such_line '/etc/rc.conf.local', 'httpd_flags='; - - file '/etc/httpd.conf', - content => template( './etc/httpd.conf.tpl', acme_hosts => \@acme_hosts ), - owner => 'root', - group => 'wheel', - mode => '644', - on_change => sub { service 'httpd' => 'restart' }; - - file '/var/www/htdocs/buetow.org', ensure => 'directory'; - file '/var/www/htdocs/buetow.org/self', ensure => 'directory'; - - # For failover health-check. 
- file '/var/www/htdocs/buetow.org/self/index.txt', - ensure => 'file', - content => template('./var/www/htdocs/buetow.org/self/index.txt.tpl'); - - service 'httpd', ensure => 'started'; - }; - -desc 'Setup inetd'; -task 'inetd', - group => 'frontends', - sub { - append_if_no_such_line '/etc/rc.conf.local', 'inetd_flags='; - - file '/etc/login.conf.d/inetd', - source => './etc/login.conf.d/inetd', - owner => 'root', - group => 'wheel', - mode => '644'; - - file '/etc/inetd.conf', - source => './etc/inetd.conf', - owner => 'root', - group => 'wheel', - mode => '644', - on_change => sub { service 'inetd' => 'restart' }; - - service 'inetd', ensure => 'started'; - }; - -desc 'Setup relayd'; -task 'relayd', - group => 'frontends', - sub { - append_if_no_such_line '/etc/rc.conf.local', 'relayd_flags='; - - file '/etc/relayd.conf', - content => template( - './etc/relayd.conf.tpl', - ipv6address => $ipv6address, - f3s_hosts => \@f3s_hosts, - acme_hosts => \@acme_hosts - ), - owner => 'root', - group => 'wheel', - mode => '600', - on_change => sub { service 'relayd' => 'restart' }; - - service 'relayd', ensure => 'started'; - append_if_no_such_line '/etc/daily.local', '/usr/sbin/rcctl start relayd'; - }; - -desc 'Setup OpenSMTPD'; -task 'smtpd', - group => 'frontends', - sub { - Rex::Logger::info('Dealing with mail aliases'); - file '/etc/mail/aliases', - source => './etc/mail/aliases', - owner => 'root', - group => 'wheel', - mode => '644', - on_change => sub { say run 'newaliases' }; - - Rex::Logger::info('Dealing with mail virtual domains'); - file '/etc/mail/virtualdomains', - source => './etc/mail/virtualdomains', - owner => 'root', - group => 'wheel', - mode => '644', - on_change => sub { service 'smtpd' => 'restart' }; - - Rex::Logger::info('Dealing with mail virtual users'); - file '/etc/mail/virtualusers', - source => './etc/mail/virtualusers', - owner => 'root', - group => 'wheel', - mode => '644', - on_change => sub { service 'smtpd' => 'restart' }; - - 
Rex::Logger::info('Dealing with smtpd.conf'); - file '/etc/mail/smtpd.conf', - content => template('./etc/mail/smtpd.conf.tpl'), - owner => 'root', - group => 'wheel', - mode => '644', - on_change => sub { service 'smtpd' => 'restart' }; - - service 'smtpd', ensure => 'started'; - }; - -desc 'Setup DNS server(s)'; -task 'nsd', - group => 'frontends', - sub { - my $restart = FALSE; - append_if_no_such_line '/etc/rc.conf.local', 'nsd_flags='; - - Rex::Logger::info('Dealing with master DNS key'); - file '/var/nsd/etc/key.conf', - content => template( './var/nsd/etc/key.conf.tpl', nsd_key => $secrets->('/var/nsd/etc/nsd_key.txt') ), - owner => 'root', - group => '_nsd', - mode => '640', - on_change => sub { $restart = TRUE }; - - Rex::Logger::info('Dealing with master DNS config'); - file '/var/nsd/etc/nsd.conf', - content => template( './var/nsd/etc/nsd.conf.master.tpl', dns_zones => \@dns_zones, ), - owner => 'root', - group => '_nsd', - mode => '640', - on_change => sub { $restart = TRUE }; - - for my $zone (@dns_zones) { - Rex::Logger::info("Dealing with DNS zone $zone"); - file "/var/nsd/zones/master/$zone.zone", - content => template( - "./var/nsd/zones/master/$zone.zone.tpl", - ips => \%ips, - f3s_hosts => \@f3s_hosts - ), - owner => 'root', - group => 'wheel', - mode => '644', - on_change => sub { $restart = TRUE }; - } - - for my $zone (@dns_zones_remove) { - Rex::Logger::info("Dealing with DNS zone removal $zone"); - file "/var/nsd/zones/master/$zone.zone", ensure => 'absent'; - } - - service 'nsd' => 'restart' if $restart; - service 'nsd', ensure => 'started'; - }; - -desc 'Setup DNS failover script(s)'; -task 'nsd_failover', - group => 'frontends', - sub { - file '/usr/local/bin/dns-failover.ksh', - source => './scripts/dns-failover.ksh', - owner => 'root', - group => 'wheel', - mode => '500'; - - file '/tmp/root.cron', - ensure => 'file', - content => "*\t*\t*\t*\t*\t-ns /usr/local/bin/dns-failover.ksh", - mode => '600'; - - run '{ crontab -l -u root ; cat 
/tmp/root.cron; } | uniq | crontab -u root -'; - run 'rm /tmp/root.cron'; - }; - -desc 'Setup DTail'; -task 'dtail', - group => 'frontends', - sub { - my $restart = FALSE; - - run 'adduser -class nologin -group _dserver -batch _dserver', unless => 'id _dserver'; - run 'usermod -d /var/run/dserver _dserver'; - - file '/etc/rc.d/dserver', - content => template('./etc/rc.d/dserver.tpl'), - owner => 'root', - group => 'wheel', - mode => '755', - on_change => sub { $restart = TRUE }; - - file '/etc/dserver', - ensure => 'directory', - owner => 'root', - group => 'wheel', - mode => '755'; - - file '/etc/dserver/dtail.json', - content => template('./etc/dserver/dtail.json.tpl'), - owner => 'root', - group => 'wheel', - mode => '755', - on_change => sub { $restart = TRUE }; - - file '/usr/local/bin/dserver-update-key-cache.sh', - content => template('./scripts/dserver-update-key-cache.sh.tpl'), - owner => 'root', - group => 'wheel', - mode => '500'; - - append_if_no_such_line '/etc/daily.local', '/usr/local/bin/dserver-update-key-cache.sh'; - - service 'dserver' => 'restart' if $restart; - service 'dserver', ensure => 'started'; - }; - -desc 'Installing Gogios binary'; -task 'gogios_install', - group => 'frontends', - sub { - file '/usr/local/bin/gogios', - source => 'usr/local/bin/gogios', - mode => '0755', - owner => 'root', - group => 'wheel'; - }; - -desc 'Setup Gogios monitoring system'; -task 'gogios', - group => 'frontends', - sub { - pkg 'monitoring-plugins', ensure => present; - pkg 'nrpe', ensure => present; - - my $gogios_path = '/usr/local/bin/gogios'; - - unless ( is_file($gogios_path) ) { - Rex::Logger::info( "Gogios not installed to $gogios_path!
Run task 'gogios_install'", 'error' ); - } - - run 'adduser -group _gogios -batch _gogios', unless => 'id _gogios'; - run 'usermod -d /var/run/gogios _gogios'; - - file '/etc/gogios.json', - content => template( './etc/gogios.json.tpl', acme_hosts => \@acme_hosts ), - owner => 'root', - group => 'wheel', - mode => '744'; - - file '/var/run/gogios', - ensure => 'directory', - owner => '_gogios', - group => '_gogios', - mode => '755'; - - file '/tmp/gogios.cron', - ensure => 'file', - content => template( './etc/gogios.cron.tpl', gogios_path => $gogios_path ), - mode => '600'; - - run 'cat /tmp/gogios.cron | crontab -u _gogios -'; - run 'rm /tmp/gogios.cron'; - - append_if_no_such_line '/etc/rc.local', 'if [ ! -d /var/run/gogios ]; then mkdir /var/run/gogios; fi'; - append_if_no_such_line '/etc/rc.local', 'chown _gogios /var/run/gogios'; - }; - -use Rex::Commands::Cron; - -desc 'Cron test'; -task 'cron_test', - group => 'openbsd_canary', - sub { - cron - add => '_gogios', - { - minute => '5', - hour => '*', - command => '/bin/ls', - }; - }; - -desc 'Installing Gorum binary'; -task 'gorum_install', - group => 'frontends', - sub { - file '/usr/local/bin/gorum', - source => 'usr/local/bin/gorum', - mode => '0755', - owner => 'root', - group => 'wheel'; - }; - -desc 'Setup Gorum quorum system'; -task 'gorum', - group => 'frontends', - sub { - my $restart = FALSE; - my $gorum_path = '/usr/local/bin/gorum'; - - unless ( is_file($gorum_path) ) { - Rex::Logger::info( "gorum not installed to $gorum_path!
Run task 'gorum_install'", 'error' ); - } - - run 'adduser -class nologin -group _gorum -batch _gorum', unless => 'id _gorum'; - run 'usermod -d /var/run/gorum _gorum'; - - file '/etc/gorum.json', - content => template('./etc/gorum.json.tpl'), - owner => 'root', - group => 'wheel', - mode => '744', - on_change => sub { $restart = TRUE }; - - file '/var/run/gorum', - ensure => 'directory', - owner => '_gorum', - group => '_gorum', - mode => '755'; - - file '/etc/rc.d/gorum', - content => template('./etc/rc.d/gorum.tpl'), - owner => 'root', - group => 'wheel', - mode => '755', - on_change => sub { $restart = TRUE }; - - service 'gorum' => 'restart' if $restart; - service 'gorum', ensure => 'started'; - }; - -desc 'Setup Foostats'; -task 'foostats', - group => 'frontends', - sub { - use File::Copy; - for my $file (qw/foostats.pl fooodds.txt/) { - Rex::Logger::info("Dealing with $file"); - my $git_script_path = $ENV{HOME} . '/git/foostats/' . $file; - copy( $git_script_path, './scripts/' . $file ) if -f $git_script_path; - } - - file '/usr/local/bin/foostats.pl', - source => './scripts/foostats.pl', - owner => 'root', - group => 'wheel', - mode => '500'; - - file '/var/www/htdocs/buetow.org/self/foostats/fooodds.txt', - source => './scripts/fooodds.txt', - owner => 'root', - group => 'wheel', - mode => '440'; - - file '/var/www/htdocs/gemtexter/stats.foo.zone', - ensure => 'directory', - owner => 'root', - group => 'wheel', - mode => '755'; - - file '/var/gemini/stats.foo.zone', - ensure => 'directory', - owner => 'root', - group => 'wheel', - mode => '755'; - - append_if_no_such_line '/etc/daily.local', 'perl /usr/local/bin/foostats.pl --parse-logs --replicate --report'; - - my @deps = qw(p5-Digest-SHA3 p5-PerlIO-gzip p5-JSON p5-String-Util p5-LWP-Protocol-https); - pkg $_, ensure => present for @deps; - - # For now, custom syslog config only required for foostats (to keep some logs for longer) - # Later, could move out to a separate task here in the Rexfile. 
- file '/etc/newsyslog.conf', - source => './etc/newsyslog.conf', - owner => 'root', - group => 'wheel', - mode => '644'; - }; - -desc 'Setup IRC bouncer'; -task 'ircbouncer', - group => 'ircbouncer', - sub { - pkg 'znc', ensure => present; - - # Requires runtime config in /var/znc before it can start. - # => geheim search znc.conf - service 'znc', ensure => 'started'; - }; - -# COMBINED TASKS SECTION - -desc 'Common configs of all hosts'; -task 'commons', - group => 'frontends', - sub { - run_task 'base'; - run_task 'nsd'; - run_task 'nsd_failover'; - run_task 'uptimed'; - run_task 'httpd'; - run_task 'gemtexter'; - run_task 'taskwarrior'; - run_task 'acme'; - run_task 'acme_invoke'; - run_task 'inetd'; - run_task 'relayd'; - run_task 'smtpd'; - run_task 'rsync'; - run_task 'gogios'; - - # run_task 'gorum'; - run_task 'foostats'; - - # Requires installing the binaries first! - #run_task 'dtail'; - }; - -1; - -# vim: syntax=perl diff --git a/gemfeed/examples/conf/frontends/etc/acme-client.conf.tpl b/gemfeed/examples/conf/frontends/etc/acme-client.conf.tpl deleted file mode 100644 index b52f5b0e..00000000 --- a/gemfeed/examples/conf/frontends/etc/acme-client.conf.tpl +++ /dev/null @@ -1,41 +0,0 @@ -# -# $OpenBSD: acme-client.conf,v 1.4 2020/09/17 09:13:06 florian Exp $ -# -authority letsencrypt { - api url "https://acme-v02.api.letsencrypt.org/directory" - account key "/etc/acme/letsencrypt-privkey.pem" -} - -authority letsencrypt-staging { - api url "https://acme-staging-v02.api.letsencrypt.org/directory" - account key "/etc/acme/letsencrypt-staging-privkey.pem" -} - -authority buypass { - api url "https://api.buypass.com/acme/directory" - account key "/etc/acme/buypass-privkey.pem" - contact "mailto:me@example.com" -} - -authority buypass-test { - api url "https://api.test4.buypass.no/acme/directory" - account key "/etc/acme/buypass-test-privkey.pem" - contact "mailto:me@example.com" -} - -<% for my $host (@$acme_hosts) { -%> -<% for my $prefix ('', 'www.', 
'standby.') { -%> -domain <%= $prefix.$host %> { - domain key "/etc/ssl/private/<%= $prefix.$host %>.key" - domain full chain certificate "/etc/ssl/<%= $prefix.$host %>.fullchain.pem" - sign with letsencrypt -} -<% } -%> -<% } -%> - -# For the server itself (e.g. TLS, or monitoring) -domain <%= "$hostname.$domain" %> { - domain key "/etc/ssl/private/<%= "$hostname.$domain" %>.key" - domain full chain certificate "/etc/ssl/<%= "$hostname.$domain" %>.fullchain.pem" - sign with letsencrypt -} diff --git a/gemfeed/examples/conf/frontends/etc/dserver/dtail.json.tpl b/gemfeed/examples/conf/frontends/etc/dserver/dtail.json.tpl deleted file mode 100644 index 6b96fbad..00000000 --- a/gemfeed/examples/conf/frontends/etc/dserver/dtail.json.tpl +++ /dev/null @@ -1,127 +0,0 @@ -{ - "Client": { - "TermColorsEnable": true, - "TermColors": { - "Remote": { - "DelimiterAttr": "Dim", - "DelimiterBg": "Blue", - "DelimiterFg": "Cyan", - "RemoteAttr": "Dim", - "RemoteBg": "Blue", - "RemoteFg": "White", - "CountAttr": "Dim", - "CountBg": "Blue", - "CountFg": "White", - "HostnameAttr": "Bold", - "HostnameBg": "Blue", - "HostnameFg": "White", - "IDAttr": "Dim", - "IDBg": "Blue", - "IDFg": "White", - "StatsOkAttr": "None", - "StatsOkBg": "Green", - "StatsOkFg": "Black", - "StatsWarnAttr": "None", - "StatsWarnBg": "Red", - "StatsWarnFg": "White", - "TextAttr": "None", - "TextBg": "Black", - "TextFg": "White" - }, - "Client": { - "DelimiterAttr": "Dim", - "DelimiterBg": "Yellow", - "DelimiterFg": "Black", - "ClientAttr": "Dim", - "ClientBg": "Yellow", - "ClientFg": "Black", - "HostnameAttr": "Dim", - "HostnameBg": "Yellow", - "HostnameFg": "Black", - "TextAttr": "None", - "TextBg": "Black", - "TextFg": "White" - }, - "Server": { - "DelimiterAttr": "AttrDim", - "DelimiterBg": "BgCyan", - "DelimiterFg": "FgBlack", - "ServerAttr": "AttrDim", - "ServerBg": "BgCyan", - "ServerFg": "FgBlack", - "HostnameAttr": "AttrBold", - "HostnameBg": "BgCyan", - "HostnameFg": "FgBlack", - "TextAttr": 
"AttrNone", - "TextBg": "BgBlack", - "TextFg": "FgWhite" - }, - "Common": { - "SeverityErrorAttr": "AttrBold", - "SeverityErrorBg": "BgRed", - "SeverityErrorFg": "FgWhite", - "SeverityFatalAttr": "AttrBold", - "SeverityFatalBg": "BgMagenta", - "SeverityFatalFg": "FgWhite", - "SeverityWarnAttr": "AttrBold", - "SeverityWarnBg": "BgBlack", - "SeverityWarnFg": "FgWhite" - }, - "MaprTable": { - "DataAttr": "AttrNone", - "DataBg": "BgBlue", - "DataFg": "FgWhite", - "DelimiterAttr": "AttrDim", - "DelimiterBg": "BgBlue", - "DelimiterFg": "FgWhite", - "HeaderAttr": "AttrBold", - "HeaderBg": "BgBlue", - "HeaderFg": "FgWhite", - "HeaderDelimiterAttr": "AttrDim", - "HeaderDelimiterBg": "BgBlue", - "HeaderDelimiterFg": "FgWhite", - "HeaderSortKeyAttr": "AttrUnderline", - "HeaderGroupKeyAttr": "AttrReverse", - "RawQueryAttr": "AttrDim", - "RawQueryBg": "BgBlack", - "RawQueryFg": "FgCyan" - } - } - }, - "Server": { - "SSHBindAddress": "0.0.0.0", - "HostKeyFile": "cache/ssh_host_key", - "HostKeyBits": 2048, - "MapreduceLogFormat": "default", - "MaxConcurrentCats": 2, - "MaxConcurrentTails": 50, - "MaxConnections": 50, - "MaxLineLength": 1048576, - "Permissions": { - "Default": [ - "readfiles:^/.*$" - ], - "Users": { - "paul": [ - "readfiles:^/.*$" - ], - "pbuetow": [ - "readfiles:^/.*$" - ], - "jamesblake": [ - "readfiles:^/tmp/foo.log$", - "readfiles:^/.*$", - "readfiles:!^/tmp/bar.log$" - ] - } - } - }, - "Common": { - "LogDir": "/var/log/dserver", - "Logger": "Fout", - "LogRotation": "Daily", - "CacheDir": "cache", - "SSHPort": 2222, - "LogLevel": "Info" - } -} diff --git a/gemfeed/examples/conf/frontends/etc/gogios.cron.tpl b/gemfeed/examples/conf/frontends/etc/gogios.cron.tpl deleted file mode 100644 index fc6299c3..00000000 --- a/gemfeed/examples/conf/frontends/etc/gogios.cron.tpl +++ /dev/null @@ -1,3 +0,0 @@ -0 7 * * * <%= $gogios_path %> -renotify >/dev/null -*/5 8-22 * * * -s <%= $gogios_path %> >/dev/null -0 3 * * 0 <%= $gogios_path %> -force >/dev/null diff --git 
a/gemfeed/examples/conf/frontends/etc/gogios.json.tpl b/gemfeed/examples/conf/frontends/etc/gogios.json.tpl deleted file mode 100644 index 683f9de8..00000000 --- a/gemfeed/examples/conf/frontends/etc/gogios.json.tpl +++ /dev/null @@ -1,98 +0,0 @@ -<% our $plugin_dir = '/usr/local/libexec/nagios'; -%> -{ - "EmailTo": "paul", - "EmailFrom": "gogios@mx.buetow.org", - "CheckTimeoutS": 10, - "CheckConcurrency": 3, - "StateDir": "/var/run/gogios", - "Checks": { - <% for my $host (qw(master standby)) { -%> - <% for my $proto (4, 6) { -%> - "Check Ping<%= $proto %> <%= $host %>.buetow.org": { - "Plugin": "<%= $plugin_dir %>/check_ping", - "Args": ["-H", "<%= $host %>.buetow.org", "-<%= $proto %>", "-w", "100,10%", "-c", "200,15%"], - "Retries": 3, - "RetryInterval": 3 - }, - <% } -%> - <% } -%> - <% for my $host (qw(fishfinger blowfish)) { -%> - "Check DTail <%= $host %>.buetow.org": { - "Plugin": "/usr/local/bin/dtailhealth", - "Args": ["--server", "<%= $host %>.buetow.org:2222"], - "DependsOn": ["Check Ping4 <%= $host %>.buetow.org", "Check Ping6 <%= $host %>.buetow.org"] - }, - <% } -%> - <% for my $host (qw(fishfinger blowfish)) { -%> - <% for my $proto (4, 6) { -%> - "Check Ping<%= $proto %> <%= $host %>.buetow.org": { - "Plugin": "<%= $plugin_dir %>/check_ping", - "Args": ["-H", "<%= $host %>.buetow.org", "-<%= $proto %>", "-w", "100,10%", "-c", "200,15%"], - "Retries": 3, - "RetryInterval": 3 - }, - <% } -%> - "Check TLS Certificate <%= $host %>.buetow.org": { - "Plugin": "<%= $plugin_dir %>/check_http", - "Args": ["--sni", "-H", "<%= $host %>.buetow.org", "-C", "20" ], - "DependsOn": ["Check Ping4 <%= $host %>.buetow.org", "Check Ping6 <%= $host %>.buetow.org"] - }, - <% } -%> - <% for my $host (@$acme_hosts) { -%> - <% for my $prefix ('', 'standby.', 'www.') { -%> - <% my $depends_on = $prefix eq 'standby.' ? 'standby.buetow.org' : 'master.buetow.org'; -%> - "Check TLS Certificate <%= $prefix . 
$host %>": { - "Plugin": "<%= $plugin_dir %>/check_http", - "Args": ["--sni", "-H", "<%= $prefix . $host %>", "-C", "20" ], - "DependsOn": ["Check Ping4 <%= $depends_on %>", "Check Ping6 <%= $depends_on %>"] - }, - <% for my $proto (4, 6) { -%> - "Check HTTP IPv<%= $proto %> <%= $prefix . $host %>": { - "Plugin": "<%= $plugin_dir %>/check_http", - "Args": ["<%= $prefix . $host %>", "-<%= $proto %>"], - "DependsOn": ["Check Ping<%= $proto %> <%= $depends_on %>"] - }, - <% } -%> - <% } -%> - <% } -%> - <% for my $host (qw(fishfinger blowfish)) { -%> - <% for my $proto (4, 6) { -%> - "Check Dig <%= $host %>.buetow.org IPv<%= $proto %>": { - "Plugin": "<%= $plugin_dir %>/check_dig", - "Args": ["-H", "<%= $host %>.buetow.org", "-l", "buetow.org", "-<%= $proto %>"], - "DependsOn": ["Check Ping<%= $proto %> <%= $host %>.buetow.org"] - }, - "Check SMTP <%= $host %>.buetow.org IPv<%= $proto %>": { - "Plugin": "<%= $plugin_dir %>/check_smtp", - "Args": ["-H", "<%= $host %>.buetow.org", "-<%= $proto %>"], - "DependsOn": ["Check Ping<%= $proto %> <%= $host %>.buetow.org"] - }, - "Check Gemini TCP <%= $host %>.buetow.org IPv<%= $proto %>": { - "Plugin": "<%= $plugin_dir %>/check_tcp", - "Args": ["-H", "<%= $host %>.buetow.org", "-p", "1965", "-<%= $proto %>"], - "DependsOn": ["Check Ping<%= $proto %> <%= $host %>.buetow.org"] - }, - <% } -%> - <% } -%> - "Check Users <%= $hostname %>": { - "Plugin": "<%= $plugin_dir %>/check_users", - "Args": ["-w", "2", "-c", "3"] - }, - "Check SWAP <%= $hostname %>": { - "Plugin": "<%= $plugin_dir %>/check_swap", - "Args": ["-w", "95%", "-c", "90%"] - }, - "Check Procs <%= $hostname %>": { - "Plugin": "<%= $plugin_dir %>/check_procs", - "Args": ["-w", "80", "-c", "100"] - }, - "Check Disk <%= $hostname %>": { - "Plugin": "<%= $plugin_dir %>/check_disk", - "Args": ["-w", "30%", "-c", "10%"] - }, - "Check Load <%= $hostname %>": { - "Plugin": "<%= $plugin_dir %>/check_load", - "Args": ["-w", "2,1,1", "-c", "4,3,3"] - } - } -} diff --git 
a/gemfeed/examples/conf/frontends/etc/gorum.json.tpl b/gemfeed/examples/conf/frontends/etc/gorum.json.tpl deleted file mode 100644 index 247a9dbf..00000000 --- a/gemfeed/examples/conf/frontends/etc/gorum.json.tpl +++ /dev/null @@ -1,18 +0,0 @@ -{ - "StateDir": "/var/run/gorum", - "Address": "<%= $hostname.'.'.$domain %>:4321", - "EmailTo": "", - "EmailFrom": "gorum@mx.buetow.org", - "Nodes": { - "Blowfish": { - "Hostname": "blowfish.buetow.org", - "Port": 4321, - "Priority": 100 - }, - "Fishfinger": { - "Hostname": "fishfinger.buetow.org", - "Port": 4321, - "Priority": 50 - } - } -} diff --git a/gemfeed/examples/conf/frontends/etc/httpd.conf.tpl b/gemfeed/examples/conf/frontends/etc/httpd.conf.tpl deleted file mode 100644 index c3a2764e..00000000 --- a/gemfeed/examples/conf/frontends/etc/httpd.conf.tpl +++ /dev/null @@ -1,184 +0,0 @@ -<% our @prefixes = ('', 'www.', 'standby.'); -%> -# Plain HTTP for ACME and HTTPS redirect -<% for my $host (@$acme_hosts) { for my $prefix (@prefixes) { -%> -server "<%= $prefix.$host %>" { - listen on * port 80 - log style forwarded - location "/.well-known/acme-challenge/*" { - root "/acme" - request strip 2 - } - location * { - block return 302 "https://$HTTP_HOST$REQUEST_URI" - } -} -<% } } -%> - -# Current server's FQDN (e.g. 
for mail server ACME cert requests) -server "<%= "$hostname.$domain" %>" { - listen on * port 80 - log style forwarded - location "/.well-known/acme-challenge/*" { - root "/acme" - request strip 2 - } - location * { - block return 302 "https://<%= "$hostname.$domain" %>" - } -} - -server "<%= "$hostname.$domain" %>" { - listen on * port 8080 - log style forwarded - location * { - root "/htdocs/buetow.org/self" - directory auto index - } -} - -# Gemtexter hosts -<% for my $host (qw/foo.zone stats.foo.zone/) { for my $prefix (@prefixes) { -%> -server "<%= $prefix.$host %>" { - listen on * port 8080 - log style forwarded - location "/.git*" { - block return 302 "https://<%= $prefix.$host %>" - } - location * { - <% if ($prefix eq 'www.') { -%> - block return 302 "https://<%= $host %>$REQUEST_URI" - <% } else { -%> - root "/htdocs/gemtexter/<%= $host %>" - directory auto index - <% } -%> - } -} -<% } } -%> - -# Redirect to paul.buetow.org -<% for my $prefix (@prefixes) { -%> -server "<%= $prefix %>buetow.org" { - listen on * port 8080 - log style forwarded - location * { - block return 302 "https://paul.buetow.org$REQUEST_URI" - } -} - -# Redirect blog to foo.zone -server "<%= $prefix %>blog.buetow.org" { - listen on * port 8080 - log style forwarded - location * { - block return 302 "https://foo.zone$REQUEST_URI" - } -} - -server "<%= $prefix %>snonux.foo" { - listen on * port 8080 - log style forwarded - location * { - block return 302 "https://foo.zone/about$REQUEST_URI" - } -} - -server "<%= $prefix %>paul.buetow.org" { - listen on * port 8080 - log style forwarded - location * { - block return 302 "https://foo.zone/about$REQUEST_URI" - } -} -<% } -%> - -# Redirect to github.dtail.dev -<% for my $prefix (@prefixes) { -%> -server "<%= $prefix %>dtail.dev" { - listen on * port 8080 - log style forwarded - location * { - block return 302 "https://github.dtail.dev$REQUEST_URI" - } -} -<% } -%> - -# Irregular Ninja special hosts -<% for my $prefix (@prefixes) { -%>
-server "<%= $prefix %>irregular.ninja" { - listen on * port 8080 - log style forwarded - location * { - root "/htdocs/irregular.ninja" - directory auto index - } -} -<% } -%> - -<% for my $prefix (@prefixes) { -%> -server "<%= $prefix %>alt.irregular.ninja" { - listen on * port 8080 - log style forwarded - location * { - root "/htdocs/alt.irregular.ninja" - directory auto index - } -} -<% } -%> - -# joern special host -<% for my $prefix (@prefixes) { -%> -server "<%= $prefix %>joern.buetow.org" { - listen on * port 8080 - log style forwarded - location * { - root "/htdocs/joern/" - directory auto index - } -} -<% } -%> - -# Dory special host -<% for my $prefix (@prefixes) { -%> -server "<%= $prefix %>dory.buetow.org" { - listen on * port 8080 - log style forwarded - location * { - root "/htdocs/joern/dory.buetow.org" - directory auto index - } -} -<% } -%> - -# ecat special host -<% for my $prefix (@prefixes) { -%> -server "<%= $prefix %>ecat.buetow.org" { - listen on * port 8080 - log style forwarded - location * { - root "/htdocs/joern/ecat.buetow.org" - directory auto index - } -} -<% } -%> - -<% for my $prefix (@prefixes) { -%> -server "<%= $prefix %>fotos.buetow.org" { - listen on * port 8080 - log style forwarded - root "/htdocs/buetow.org/fotos" - directory auto index -} -<% } -%> - -# Defaults -server "default" { - listen on * port 80 - log style forwarded - block return 302 "https://foo.zone$REQUEST_URI" -} - -server "default" { - listen on * port 8080 - log style forwarded - block return 302 "https://foo.zone$REQUEST_URI" -} diff --git a/gemfeed/examples/conf/frontends/etc/inetd.conf b/gemfeed/examples/conf/frontends/etc/inetd.conf deleted file mode 100644 index 13163877..00000000 --- a/gemfeed/examples/conf/frontends/etc/inetd.conf +++ /dev/null @@ -1,2 +0,0 @@ -127.0.0.1:11965 stream tcp nowait www /usr/local/bin/vger vger -v -rsync stream tcp nowait root /usr/local/bin/rsync rsyncd --daemon diff --git 
a/gemfeed/examples/conf/frontends/etc/login.conf.d/inetd b/gemfeed/examples/conf/frontends/etc/login.conf.d/inetd deleted file mode 100644 index c8620c41..00000000 --- a/gemfeed/examples/conf/frontends/etc/login.conf.d/inetd +++ /dev/null @@ -1,3 +0,0 @@ -inetd:\ - :maxproc=10:\ - :tc=daemon: diff --git a/gemfeed/examples/conf/frontends/etc/mail/aliases b/gemfeed/examples/conf/frontends/etc/mail/aliases deleted file mode 100644 index 91bf1d06..00000000 --- a/gemfeed/examples/conf/frontends/etc/mail/aliases +++ /dev/null @@ -1,103 +0,0 @@ -# -# $OpenBSD: aliases,v 1.68 2020/01/24 06:17:37 tedu Exp $ -# -# Aliases in this file will NOT be expanded in the header from -# Mail, but WILL be visible over networks or from /usr/libexec/mail.local. -# -# >>>>>>>>>> The program "newaliases" must be run after -# >> NOTE >> this file is updated for any changes to -# >>>>>>>>>> show through to smtpd. -# - -# Basic system aliases -- these MUST be present -MAILER-DAEMON: postmaster -postmaster: root - -# General redirections for important pseudo accounts -daemon: root -ftp-bugs: root -operator: root -www: root -admin: root - -# Redirections for pseudo accounts that should not receive mail -_bgpd: /dev/null -_dhcp: /dev/null -_dpb: /dev/null -_dvmrpd: /dev/null -_eigrpd: /dev/null -_file: /dev/null -_fingerd: /dev/null -_ftp: /dev/null -_hostapd: /dev/null -_identd: /dev/null -_iked: /dev/null -_isakmpd: /dev/null -_iscsid: /dev/null -_ldapd: /dev/null -_ldpd: /dev/null -_mopd: /dev/null -_nsd: /dev/null -_ntp: /dev/null -_ospfd: /dev/null -_ospf6d: /dev/null -_pbuild: /dev/null -_pfetch: /dev/null -_pflogd: /dev/null -_ping: /dev/null -_pkgfetch: /dev/null -_pkguntar: /dev/null -_portmap: /dev/null -_ppp: /dev/null -_rad: /dev/null -_radiusd: /dev/null -_rbootd: /dev/null -_relayd: /dev/null -_ripd: /dev/null -_rstatd: /dev/null -_rusersd: /dev/null -_rwalld: /dev/null -_smtpd: /dev/null -_smtpq: /dev/null -_sndio: /dev/null -_snmpd: /dev/null -_spamd: /dev/null -_switchd: 
/dev/null -_syslogd: /dev/null -_tcpdump: /dev/null -_traceroute: /dev/null -_tftpd: /dev/null -_unbound: /dev/null -_unwind: /dev/null -_vmd: /dev/null -_x11: /dev/null -_ypldap: /dev/null -bin: /dev/null -build: /dev/null -nobody: /dev/null -_tftp_proxy: /dev/null -_ftp_proxy: /dev/null -_sndiop: /dev/null -_syspatch: /dev/null -_slaacd: /dev/null -sshd: /dev/null - -# Well-known aliases -- these should be filled in! -root: paul -manager: root -dumper: root - -# RFC 2142: NETWORK OPERATIONS MAILBOX NAMES -abuse: root -noc: root -security: root - -# RFC 2142: SUPPORT MAILBOX NAMES FOR SPECIFIC INTERNET SERVICES -hostmaster: root -# usenet: root -# news: usenet -webmaster: root -# ftp: root - -paul: paul.buetow@protonmail.com -albena: albena.buetow@protonmail.com diff --git a/gemfeed/examples/conf/frontends/etc/mail/smtpd.conf.tpl b/gemfeed/examples/conf/frontends/etc/mail/smtpd.conf.tpl deleted file mode 100644 index 7764b345..00000000 --- a/gemfeed/examples/conf/frontends/etc/mail/smtpd.conf.tpl +++ /dev/null @@ -1,23 +0,0 @@ -# This is the smtpd server system-wide configuration file. -# See smtpd.conf(5) for more information. - -# I used https://www.checktls.com/TestReceiver for testing. 
- -pki "buetow_org_tls" cert "/etc/ssl/<%= "$hostname.$domain" %>.fullchain.pem" -pki "buetow_org_tls" key "/etc/ssl/private/<%= "$hostname.$domain" %>.key" - -table aliases file:/etc/mail/aliases -table virtualdomains file:/etc/mail/virtualdomains -table virtualusers file:/etc/mail/virtualusers - -listen on socket -listen on all tls pki "buetow_org_tls" hostname "<%= "$hostname.$domain" %>" -#listen on all - -action localmail mbox alias <aliases> -action receive mbox virtual <virtualusers> -action outbound relay - -match from any for domain <virtualdomains> action receive -match from local for local action localmail -match from local for any action outbound diff --git a/gemfeed/examples/conf/frontends/etc/mail/virtualusers b/gemfeed/examples/conf/frontends/etc/mail/virtualusers deleted file mode 100644 index 6cfac58b..00000000 --- a/gemfeed/examples/conf/frontends/etc/mail/virtualusers +++ /dev/null @@ -1,5 +0,0 @@ -albena@buetow.org albena.buetow@protonmail.com -joern@buetow.org df2hbradio@gmail.com -dory@buetow.org df2hbradio@gmail.com -ecat@buetow.org df2hbradio@gmail.com -@ paul.buetow@protonmail.com diff --git a/gemfeed/examples/conf/frontends/etc/myname.tpl b/gemfeed/examples/conf/frontends/etc/myname.tpl deleted file mode 100644 index dcd4ca04..00000000 --- a/gemfeed/examples/conf/frontends/etc/myname.tpl +++ /dev/null @@ -1 +0,0 @@ -<%= $fqdns->($vio0_ip) %> diff --git a/gemfeed/examples/conf/frontends/etc/newsyslog.conf b/gemfeed/examples/conf/frontends/etc/newsyslog.conf deleted file mode 100644 index bbd1aa55..00000000 --- a/gemfeed/examples/conf/frontends/etc/newsyslog.conf +++ /dev/null @@ -1,14 +0,0 @@ -# logfile_name owner:group mode count size when flags -/var/cron/log root:wheel 600 3 10 * Z -/var/log/authlog root:wheel 640 7 * 168 Z -/var/log/daemon 640 14 300 * Z -/var/log/lpd-errs 640 7 10 * Z -/var/log/maillog 640 7 * 24 Z -/var/log/messages 644 5 300 * Z -/var/log/secure 600 7 * 168 Z -/var/log/wtmp 644 7 * $M1D4 B "" -/var/log/xferlog 640 7 
250 * Z -/var/log/pflog 600 3 250 * ZB "pkill -HUP -u root -U root -t - -x pflogd" -/var/www/logs/access.log 644 14 * $W0 Z "pkill -USR1 -u root -U root -x httpd" -/var/www/logs/error.log 644 7 250 * Z "pkill -USR1 -u root -U root -x httpd" -/var/log/fooodds 640 7 300 * Z diff --git a/gemfeed/examples/conf/frontends/etc/rc.conf.local b/gemfeed/examples/conf/frontends/etc/rc.conf.local deleted file mode 100644 index 842f16d7..00000000 --- a/gemfeed/examples/conf/frontends/etc/rc.conf.local +++ /dev/null @@ -1,5 +0,0 @@ -httpd_flags= -inetd_flags= -nsd_flags= -pkg_scripts="uptimed httpd" -relayd_flags= diff --git a/gemfeed/examples/conf/frontends/etc/rc.d/dserver.tpl b/gemfeed/examples/conf/frontends/etc/rc.d/dserver.tpl deleted file mode 100755 index aec80f54..00000000 --- a/gemfeed/examples/conf/frontends/etc/rc.d/dserver.tpl +++ /dev/null @@ -1,16 +0,0 @@ -#!/bin/ksh - -daemon="/usr/local/bin/dserver" -daemon_flags="-cfg /etc/dserver/dtail.json" -daemon_user="_dserver" - -. /etc/rc.d/rc.subr - -rc_reload=NO - -rc_pre() { - install -d -o _dserver /var/log/dserver - install -d -o _dserver /var/run/dserver/cache -} - -rc_cmd $1 & diff --git a/gemfeed/examples/conf/frontends/etc/rc.d/gorum.tpl b/gemfeed/examples/conf/frontends/etc/rc.d/gorum.tpl deleted file mode 100755 index 3b4f403d..00000000 --- a/gemfeed/examples/conf/frontends/etc/rc.d/gorum.tpl +++ /dev/null @@ -1,16 +0,0 @@ -#!/bin/ksh - -daemon="/usr/local/bin/gorum" -daemon_flags="-cfg /etc/gorum.json" -daemon_user="_gorum" -daemon_logger="daemon.info" - -. 
/etc/rc.d/rc.subr - -rc_reload=NO - -rc_pre() { - install -d -o _gorum /var/log/gorum -} - -rc_cmd $1 & diff --git a/gemfeed/examples/conf/frontends/etc/relayd.conf.tpl b/gemfeed/examples/conf/frontends/etc/relayd.conf.tpl deleted file mode 100644 index 1900c0bf..00000000 --- a/gemfeed/examples/conf/frontends/etc/relayd.conf.tpl +++ /dev/null @@ -1,86 +0,0 @@ -<% our @prefixes = ('', 'www.', 'standby.'); -%> -log connection - -# Wireguard endpoints of the k3s cluster nodes running in FreeBSD bhyve Linux VMs via Wireguard tunnels -table <f3s> { - 192.168.2.120 - 192.168.2.121 - 192.168.2.122 -} - -# Same backends, separate table for registry service on port 30001 -table <f3s_registry> { - 192.168.2.120 - 192.168.2.121 - 192.168.2.122 -} - -# Local OpenBSD httpd -table <localhost> { - 127.0.0.1 - ::1 -} - -http protocol "https" { - <% for my $host (@$acme_hosts) { for my $prefix (@prefixes) { -%> - tls keypair <%= $prefix.$host -%> - <% } } -%> - tls keypair <%= $hostname.'.'.$domain -%> - - match request header set "X-Forwarded-For" value "$REMOTE_ADDR" - match request header set "X-Forwarded-Proto" value "https" - - # WebSocket support for audiobookshelf - pass header "Connection" - pass header "Upgrade" - pass header "Sec-WebSocket-Key" - pass header "Sec-WebSocket-Version" - pass header "Sec-WebSocket-Extensions" - pass header "Sec-WebSocket-Protocol" - - <% for my $host (@$f3s_hosts) { for my $prefix (@prefixes) { -%> - <% if ($host eq 'registry.f3s.buetow.org') { -%> - match request quick header "Host" value "<%= $prefix.$host -%>" forward to <f3s_registry> - <% } else { -%> - match request quick header "Host" value "<%= $prefix.$host -%>" forward to <f3s> - <% } } } -%> -} - -relay "https4" { - listen on <%= $vio0_ip %> port 443 tls - protocol "https" - forward to <localhost> port 8080 - forward to <f3s_registry> port 30001 check tcp - forward to <f3s> port 80 check tcp -} - -relay "https6" { - listen on <%= $ipv6address->($hostname) %> port 443 tls - protocol 
"https" - forward to <localhost> port 8080 - forward to <f3s_registry> port 30001 check tcp - forward to <f3s> port 80 check tcp -} - -tcp protocol "gemini" { - tls keypair foo.zone - tls keypair stats.foo.zone - tls keypair snonux.foo - tls keypair paul.buetow.org - tls keypair standby.foo.zone - tls keypair standby.stats.foo.zone - tls keypair standby.snonux.foo - tls keypair standby.paul.buetow.org -} - -relay "gemini4" { - listen on <%= $vio0_ip %> port 1965 tls - protocol "gemini" - forward to 127.0.0.1 port 11965 -} - -relay "gemini6" { - listen on <%= $ipv6address->($hostname) %> port 1965 tls - protocol "gemini" - forward to 127.0.0.1 port 11965 -} diff --git a/gemfeed/examples/conf/frontends/etc/rsyncd.conf.tpl b/gemfeed/examples/conf/frontends/etc/rsyncd.conf.tpl deleted file mode 100644 index e9fe3cf8..00000000 --- a/gemfeed/examples/conf/frontends/etc/rsyncd.conf.tpl +++ /dev/null @@ -1,28 +0,0 @@ -<% my $allow = '*.wg0.wan.buetow.org,*.wg0,localhost'; %> -max connections = 5 -timeout = 300 - -[joernshtdocs] -comment = Joerns htdocs -path = /var/www/htdocs/joern -read only = yes -list = yes -uid = www -gid = www -hosts allow = <%= $allow %> - -# [publicgemini] -# comment = Public Gemini capsule content -# path = /var/gemini -# read only = yes -# list = yes -# uid = www -# gid = www -# hosts allow = <%= $allow %> - -# [sslcerts] -# comment = TLS certificates -# path = /etc/ssl -# read only = yes -# list = yes -# hosts allow = <%= $allow %> diff --git a/gemfeed/examples/conf/frontends/etc/taskrc.tpl b/gemfeed/examples/conf/frontends/etc/taskrc.tpl deleted file mode 100644 index ed97d385..00000000 --- a/gemfeed/examples/conf/frontends/etc/taskrc.tpl +++ /dev/null @@ -1,40 +0,0 @@ -# [Created by task 2.6.2 7/9/2023 20:52:31] -# Taskwarrior program configuration file. 
-# For more documentation, see https://taskwarrior.org or try 'man task', 'man task-color', -# 'man task-sync' or 'man taskrc' - -# Here is an example of entries that use the default, override and blank values -# variable=foo -- By specifying a value, this overrides the default -# variable= -- By specifying no value, this means no default -# #variable=foo -- By commenting out the line, or deleting it, this uses the default - -# You can also reference environment variables: -# variable=$HOME/task -# variable=$VALUE - -# Use the command 'task show' to see all defaults and overrides - -# Files -data.location=/home/git/.task - -# To use the default location of the XDG directories, -# move this configuration file from ~/.taskrc to ~/.config/task/taskrc and uncomment below - -#data.location=~/.local/share/task -#hooks.location=~/.config/task/hooks - -# Color theme (uncomment one to use) -#include light-16.theme -#include light-256.theme -#include dark-16.theme -#include dark-256.theme -#include dark-red-256.theme -#include dark-green-256.theme -#include dark-blue-256.theme -#include dark-violets-256.theme -#include dark-yellow-green.theme -#include dark-gray-256.theme -#include dark-gray-blue-256.theme -#include solarized-dark-256.theme -#include solarized-light-256.theme -#include no-color.theme diff --git a/gemfeed/examples/conf/frontends/etc/tmux.conf b/gemfeed/examples/conf/frontends/etc/tmux.conf deleted file mode 100644 index 14493260..00000000 --- a/gemfeed/examples/conf/frontends/etc/tmux.conf +++ /dev/null @@ -1,24 +0,0 @@ -set-option -g allow-rename off -set-option -g default-terminal "screen-256color" -set-option -g history-limit 100000 -set-option -g status-bg '#444444' -set-option -g status-fg '#ffa500' - -set-window-option -g mode-keys vi - -bind-key h select-pane -L -bind-key j select-pane -D -bind-key k select-pane -U -bind-key l select-pane -R - -bind-key H resize-pane -L 5 -bind-key J resize-pane -D 5 -bind-key K resize-pane -U 5 -bind-key L resize-pane
-R 5 - -bind-key b break-pane -d -bind-key c new-window -c '#{pane_current_path}' -bind-key p setw synchronize-panes off -bind-key P setw synchronize-panes on -bind-key r source-file ~/.tmux.conf \; display-message "~/.tmux.conf reloaded" -bind-key T choose-tree diff --git a/gemfeed/examples/conf/frontends/scripts/acme.sh.tpl b/gemfeed/examples/conf/frontends/scripts/acme.sh.tpl deleted file mode 100644 index 8d306092..00000000 --- a/gemfeed/examples/conf/frontends/scripts/acme.sh.tpl +++ /dev/null @@ -1,68 +0,0 @@ -#!/bin/sh - -MY_IP=`ifconfig vio0 | awk '$1 == "inet" { print $2 }'` - -# New hosts may not have a cert, so just copy foo.zone as a -# placeholder, so that services can at least start properly. -# The cert will be updated by subsequent acme-client runs! -ensure_placeholder_cert () { - host=$1 - copy_from=foo.zone - - if [ ! -f /etc/ssl/$host.crt ]; then - cp -v /etc/ssl/$copy_from.crt /etc/ssl/$host.crt - cp -v /etc/ssl/$copy_from.fullchain.pem /etc/ssl/$host.fullchain.pem - cp -v /etc/ssl/private/$copy_from.key /etc/ssl/private/$host.key - fi -} - -handle_cert () { - host=$1 - host_ip=`host $host | awk '/has address/ { print $(NF) }'` - - grep -q "^server \"$host\"" /etc/httpd.conf - if [ $? -ne 0 ]; then - echo "Host $host not configured in httpd, skipping..." - return - fi - ensure_placeholder_cert "$host" - - if [ "$MY_IP" != "$host_ip" ]; then - echo "Not serving $host, skipping..." - return - fi - - # Create symlink, so that relayd also can read it. - crt_path=/etc/ssl/$host - if [ -e $crt_path.crt ]; then - rm $crt_path.crt - fi - ln -s $crt_path.fullchain.pem $crt_path.crt - # Requesting and renewing certificate. - /usr/sbin/acme-client -v $host -} - -has_update=no -<% for my $host (@$acme_hosts) { -%> -<% for my $prefix ('', 'www.', 'standby.') { -%> -handle_cert <%= $prefix.$host %> -if [ $? -eq 0 ]; then - has_update=yes -fi -<% } -%> -<% } -%> - -# Current server's FQDN (e.g. for mail server certs) -handle_cert <%= "$hostname.$domain" %> -if [ $?
-eq 0 ]; then - has_update=yes -fi - -# Pick up the new certs. -if [ $has_update = yes ]; then - # TLS offloading fully moved to relayd now - # /usr/sbin/rcctl reload httpd - - /usr/sbin/rcctl reload relayd - /usr/sbin/rcctl restart smtpd -fi diff --git a/gemfeed/examples/conf/frontends/scripts/dns-failover.ksh b/gemfeed/examples/conf/frontends/scripts/dns-failover.ksh deleted file mode 100644 index dfc24ee3..00000000 --- a/gemfeed/examples/conf/frontends/scripts/dns-failover.ksh +++ /dev/null @@ -1,133 +0,0 @@ -#!/bin/ksh - -ZONES_DIR=/var/nsd/zones/master/ -DEFAULT_MASTER=fishfinger.buetow.org -DEFAULT_STANDBY=blowfish.buetow.org - -determine_master_and_standby () { - local master=$DEFAULT_MASTER - local standby=$DEFAULT_STANDBY - - # Weekly auto-failover for Let's Encrypt automation - local -i -r week_of_the_year=$(date +%U) - if [ $(( week_of_the_year % 2 )) -ne 0 ]; then - local tmp=$master - master=$standby - standby=$tmp - fi - - local -i health_ok=1 - if ! ftp -4 -o - https://$master/index.txt | grep -q "Welcome to $master"; then - echo "https://$master/index.txt IPv4 health check failed" - health_ok=0 - elif ! ftp -6 -o - https://$master/index.txt | grep -q "Welcome to $master"; then - echo "https://$master/index.txt IPv6 health check failed" - health_ok=0 - fi - - if [ $health_ok -eq 0 ]; then - local tmp=$master - master=$standby - standby=$tmp - fi - - echo "Master is $master, standby is $standby" - - host $master | awk '/has address/ { print $(NF) }' >/var/nsd/run/master_a - host $master | awk '/has IPv6 address/ { print $(NF) }' >/var/nsd/run/master_aaaa - host $standby | awk '/has address/ { print $(NF) }' >/var/nsd/run/standby_a - host $standby | awk '/has IPv6 address/ { print $(NF) }' >/var/nsd/run/standby_aaaa -} - -transform () { - sed -E ' - /IN A .*; Enable failover/ { - /^standby/! 
{ - s/^(.*) 300 IN A (.*) ; (.*)/\1 300 IN A '$(cat /var/nsd/run/master_a)' ; \3/; - } - /^standby/ { - s/^(.*) 300 IN A (.*) ; (.*)/\1 300 IN A '$(cat /var/nsd/run/standby_a)' ; \3/; - } - } - /IN AAAA .*; Enable failover/ { - /^standby/! { - s/^(.*) 300 IN AAAA (.*) ; (.*)/\1 300 IN AAAA '$(cat /var/nsd/run/master_aaaa)' ; \3/; - } - /^standby/ { - s/^(.*) 300 IN AAAA (.*) ; (.*)/\1 300 IN AAAA '$(cat /var/nsd/run/standby_aaaa)' ; \3/; - } - } - / ; serial/ { - s/^( +) ([0-9]+) .*; (.*)/\1 '$(date +%s)' ; \3/; - } - ' -} - -zone_is_ok () { - local -r zone=$1 - local -r domain=${zone%.zone} - dig $domain @localhost | grep -q "$domain.*IN.*NS" -} - -failover_zone () { - local -r zone_file=$1 - local -r zone=$(basename $zone_file) - - # Race condition (e.g. script execution aborted in the middle of a previous run) - if [ -f $zone_file.bak ]; then - mv $zone_file.bak $zone_file - fi - - cat $zone_file | transform > $zone_file.new.tmp - - grep -v ' ; serial' $zone_file.new.tmp > $zone_file.new.noserial.tmp - grep -v ' ; serial' $zone_file > $zone_file.old.noserial.tmp - - echo "Has zone $zone_file changed?" - if diff -u $zone_file.old.noserial.tmp $zone_file.new.noserial.tmp; then - echo "The zone $zone_file hasn't changed" - rm $zone_file.*.tmp - return 0 - fi - - cp $zone_file $zone_file.bak - mv $zone_file.new.tmp $zone_file - rm $zone_file.*.tmp - echo "Reloading nsd" - nsd-control reload - - if ! zone_is_ok $zone; then - echo "Rolling back $zone_file changes" - cp $zone_file $zone_file.invalid - mv $zone_file.bak $zone_file - echo "Reloading nsd" - nsd-control reload - zone_is_ok $zone - return 3 - fi - - for cleanup in invalid bak; do - if [ -f $zone_file.$cleanup ]; then - rm $zone_file.$cleanup - fi - done - - echo "Failover of zone $zone completed" - return 1 -} - -main () { - determine_master_and_standby - - local -i ec=0 - for zone_file in $ZONES_DIR/*.zone; do - if !
failover_zone $zone_file; then - ec=1 - fi - done - - # ec other than 0: CRON will send out an E-Mail. - exit $ec -} - -main diff --git a/gemfeed/examples/conf/frontends/scripts/dserver-update-key-cache.sh.tpl b/gemfeed/examples/conf/frontends/scripts/dserver-update-key-cache.sh.tpl deleted file mode 100644 index 86b5ecf9..00000000 --- a/gemfeed/examples/conf/frontends/scripts/dserver-update-key-cache.sh.tpl +++ /dev/null @@ -1,34 +0,0 @@ -#!/bin/ksh - -CACHEDIR=/var/run/dserver/cache -DSERVER_USER=_dserver -DSERVER_GROUP=_dserver - -echo 'Updating SSH key cache' - -ls /home/ | while read remoteuser; do - keysfile=/home/$remoteuser/.ssh/authorized_keys - - if [ -f $keysfile ]; then - cachefile=$CACHEDIR/$remoteuser.authorized_keys - echo "Caching $keysfile -> $cachefile" - - cp $keysfile $cachefile - chown $DSERVER_USER:$DSERVER_GROUP $cachefile - chmod 600 $cachefile - fi -done - -# Cleanup obsolete public SSH keys -find $CACHEDIR -name \*.authorized_keys -type f | -while read cachefile; do - remoteuser=$(basename $cachefile | cut -d. -f1) - keysfile=/home/$remoteuser/.ssh/authorized_keys - - if [ ! -f $keysfile ]; then - echo "Deleting obsolete cache file $cachefile" - rm $cachefile - fi -done - -echo 'All set...' diff --git a/gemfeed/examples/conf/frontends/scripts/fooodds.txt b/gemfeed/examples/conf/frontends/scripts/fooodds.txt deleted file mode 100644 index 0e08bdd1..00000000 --- a/gemfeed/examples/conf/frontends/scripts/fooodds.txt +++ /dev/null @@ -1,191 +0,0 @@ -% -+ -.. 
-/actuator -/actuator/health -/admin -/ajax -alfacgiapi -/ALFA_DATA -/api -/apply.cgi -/ARest1.exe -.asp -/aspera -/assets -/audiobookshelf -/auth -/autodiscover -/.aws -/bac -/back -/backup -/bak -/base -/.bash_history -/bf -/bin -/bin/sh -/bk -/bkp -/blog -/blurs -/boaform -/boafrm -/.bod -/Br7q -/british-airways -/buetow.org.zip -/buetow.zip -/burodecredito -/c -/.cache -/ccaguardians -/cdn-cgi -/centralbankthailand -/cfdump.packetsdatabase.com -/charlesbridge -/check.txt -/cimtechsolutions -/.circleci -/c/k2 -/ckfinder -/client.zip -/cloud-config.yml -/cloudflare.com -/clssettlement -/cmd,/simZysh/register_main/setCookie -/cn/cmd -/codeberg -/CODE_OF_CONDUCT.md -/columbiagas -/common_page -/comp -/concerto -/config -/config.json -/config.xml -/Config.xml -/config.yaml -/config.yml -/connectivitycheck.gstatic.com -/connector.sds -/console -/contact-information.html -/contact-us -/containers -/CONTRIBUTING.md -/credentials.txt -/crivo -/current_config -/cwservices -/daAV -/dana-cached -/dana-na -/database_backup.sql -/.database.bak -/database.sql -/data.zip -/db -/debug -/debug.cgi -/decoherence-is-just-realizing-this -/demo -/developmentserver -/directory.gz -/directory.tar -/directory.zip -/dir.html -/DnHb -/dns-query -docker-compose -/docker-compose.yml -/?document=images -/Dorybau2.html -/Dorybau.html -/dory.buetow.org -/download -/DpbF -/druid -/dtail.dev.gz -/dtail.dev.sql -/dtail.dev.tar.gz -/dtail.dev.zip -/dtail.html -/dtail.zip -/dump.sql -/dvQ1 -/dvr/cmd -/edualy-shammin -/ekggho -.env -/epa -/etc -/eW9h -/ews -/F3to -/f3Yk -/fahrzeugtechnik.fh-joanneum.at -/failedbythefos -/features -/federalhomeloanbankofdesmoines -/fhir -/fhir-server -/file-manager -/files -/files.zip -/firstfinancial -/flash -/flower -/foostats -/footlocker -/foo.zip -/foo.zone.bz2 -/foozone.webp -/foo.zone.zip -/form.html -/freeze.na4u.ru -/frontend.zip -/ftpsync.settings -/full_backup.zip -/FvwmRearrange.png -/gdb.pdf -/geoserver -.git -/git-guides -/global-protect 
-/gm-donate.net -/GMUs -/goform -/google.com -/GoRU -/GponForm -/helpdesk -/high-noise-level-for-that-earth-day-with-colors-gay -/his-viewpoint-is-not-economics-until-they-harden -/hN6p -HNAP1 -/hp -/_ignition -jndi:ldap -.js -.lua -microsoft.exchange -/owa/ -.php -/phpinfo -phpunit -/portal/redlion -/_profiler -.rar -/RDWeb -robots.txt -/SDK -/sitemap.xml -/sites -.sql -/ueditor -/vendor -@vite -wordpress -/wp diff --git a/gemfeed/examples/conf/frontends/scripts/foostats.pl b/gemfeed/examples/conf/frontends/scripts/foostats.pl deleted file mode 100644 index a440d941..00000000 --- a/gemfeed/examples/conf/frontends/scripts/foostats.pl +++ /dev/null @@ -1,1910 +0,0 @@ -#!/usr/bin/perl - -use v5.38; - -# Those are enabled automatically now w/ this version of Perl -# use strict; -# use warnings; - -use builtin qw(true false); -use experimental qw(builtin); - -use feature qw(refaliasing); -no warnings qw(experimental::refaliasing); - -# Debugging aids like diagnostics are noisy in production. -# Removed per review: enable locally when debugging only. - -use constant VERSION => 'v0.1.0'; - -# Package: FileHelper — small file/JSON helpers -# - Purpose: Atomic writes, gzip JSON read/write, and line reading. -# - Notes: Dies on I/O errors; JSON encoding uses core JSON. -package FileHelper { - use JSON; - - # Sub: write - # - Purpose: Atomic write to a file via "$path.tmp" and rename. - # - Params: $path (str) destination; $content (str) contents to write. - # - Return: undef; dies on failure. - sub write ($path, $content) { - open my $fh, '>', "$path.tmp" or die "\nCannot open file: $!"; - print $fh $content; - close $fh; - rename "$path.tmp", $path; - } - - # Sub: write_json_gz - # - Purpose: JSON-encode $data and write it gzipped atomically. - # - Params: $path (str) destination path; $data (ref/scalar) Perl data. - # - Return: undef; dies on failure. 
- sub write_json_gz ($path, $data) { - my $json = encode_json $data; - - say "Writing $path"; - open my $fd, '>:gzip', "$path.tmp" or die "$path.tmp: $!"; - print $fd $json; - close $fd; - - rename "$path.tmp", $path or die "$path.tmp: $!"; - } - - # Sub: read_json_gz - # - Purpose: Read a gzipped JSON file and decode to Perl data. - # - Params: $path (str) path to .json.gz file. - # - Return: Perl data structure. - sub read_json_gz ($path) { - say "Reading $path"; - open my $fd, '<:gzip', $path or die "$path: $!"; - my $json = decode_json <$fd>; - close $fd; - return $json; - } - - # Sub: read_lines - # - Purpose: Slurp file lines and chomp newlines. - # - Params: $path (str) file path. - # - Return: list of lines (no trailing newlines). - sub read_lines ($path) { - my @lines; - open(my $fh, '<', $path) or die "$path: $!"; - chomp(@lines = <$fh>); - close($fh); - return @lines; - } -} - -# Package: DateHelper — date range helpers -# - Purpose: Produce date strings used for report windows. -# - Format: Dates are returned as YYYYMMDD strings. -package DateHelper { - use Time::Piece; - - # Sub: last_month_dates - # - Purpose: Return dates for yesterday back to 31 days ago (inclusive). - # - Params: none. - # - Return: list of YYYYMMDD strings, newest first. - sub last_month_dates () { - my $today = localtime; - my @dates; - - for my $days_ago (1 .. 31) { - my $date = $today - ($days_ago * 24 * 60 * 60); - push @dates, $date->strftime('%Y%m%d'); - } - - return @dates; - } - -} - -# Package: Foostats::Logreader — parse and normalize logs -# - Purpose: Read web and gemini logs, anonymize IPs, and emit normalized events. -# - Output Event: { proto, host, ip_hash, ip_proto, date, time, uri_path, status } -package Foostats::Logreader { - use Digest::SHA3 'sha3_512_base64'; - use File::stat; - use PerlIO::gzip; - use Time::Piece; - use String::Util qw(contains startswith endswith); - - # Make log locations configurable (env overrides) to enable testing. 
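The Logreader's IP anonymization (classify by the presence of ':', then hash with SHA3-512 and base64-encode, as `anonymize_ip` below does via `Digest::SHA3`) can be sketched in Python. This is an illustrative approximation; whether the encoding matches `sha3_512_base64` byte-for-byte is an assumption, noted in the comment:

```python
import base64
import hashlib

def anonymize_ip(ip):
    # ':' only occurs in IPv6 addresses, so its presence decides the
    # protocol label; hashing keeps per-client uniqueness in the stats
    # without storing the raw address.
    proto = "IPv6" if ":" in ip else "IPv4"
    digest = hashlib.sha3_512(ip.encode()).digest()
    # Assumption: strip '=' padding, as Perl's Digest base64 helpers do.
    return base64.b64encode(digest).decode().rstrip("="), proto
```

The hash is stable across runs, so the same client aggregates to the same bucket on both frontends.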
- # Sub: gemini_logs_glob - # - Purpose: Glob for gemini-related logs; env override for testing. - # - Return: glob pattern string. - sub gemini_logs_glob { $ENV{FOOSTATS_GEMINI_LOGS_GLOB} // '/var/log/daemon*' } - - # Sub: web_logs_glob - # - Purpose: Glob for web access logs; env override for testing. - # - Return: glob pattern string. - sub web_logs_glob { $ENV{FOOSTATS_WEB_LOGS_GLOB} // '/var/www/logs/access.log*' } - - # Sub: anonymize_ip - # - Purpose: Classify IPv4/IPv6 and map IP to a stable SHA3-512 base64 hash. - # - Params: $ip (str) source IP. - # - Return: ($hash, $proto) where $proto is 'IPv4' or 'IPv6'. - sub anonymize_ip ($ip) { - my $ip_proto = contains($ip, ':') ? 'IPv6' : 'IPv4'; - my $ip_hash = sha3_512_base64 $ip; - return ($ip_hash, $ip_proto); - } - - # Sub: read_lines - # - Purpose: Iterate files matching glob by age; invoke $cb for each line. - # - Params: $glob (str) file glob; $cb (code) callback ($year, @fields). - # - Return: undef; stops early if callback returns undef for a file. - sub read_lines ($glob, $cb) { - my sub year ($path) { - localtime((stat $path)->mtime)->strftime('%Y'); - } - - my sub open_file ($path) { - my $flag = $path =~ /\.gz$/ ? '<:gzip' : '<'; - open my $fd, $flag, $path or die "$path: $!"; - return $fd; - } - - my $last = false; - say 'File path glob matches: ' . join(' ', glob $glob); - - LAST: - for my $path (sort { -M $a <=> -M $b } glob $glob) { - say "Processing $path"; - - my $file = open_file $path; - my $year = year $file; - - while (<$file>) { - next if contains($_, 'logfile turned over'); - - # last == true means: After this file, don't process more - $last = true unless defined $cb->($year, split / +/); - } - - say "Closing $path (last:$last)"; - close $file; - last LAST if $last; - } - } - - # Sub: parse_web_logs - # - Purpose: Parse web log lines into normalized events and pass to callback. - # - Params: $last_processed_date (YYYYMMDD int) lower bound; $cb (code) event consumer. - # - Return: undef. 
- sub parse_web_logs ($last_processed_date, $cb) { - my sub parse_date ($date) { - my $t = Time::Piece->strptime($date, '[%d/%b/%Y:%H:%M:%S'); - return ($t->strftime('%Y%m%d'), $t->strftime('%H%M%S')); - } - - my sub parse_web_line (@line) { - my ($date, $time) = parse_date $line [4]; - return undef if $date < $last_processed_date; - - # X-Forwarded-For? - my $ip = $line[-2] eq '-' ? $line[1] : $line[-2]; - my ($ip_hash, $ip_proto) = anonymize_ip $ip; - - return { - proto => 'web', - host => $line[0], - ip_hash => $ip_hash, - ip_proto => $ip_proto, - date => $date, - time => $time, - uri_path => $line[7], - status => $line[9], - }; - } - - read_lines web_logs_glob(), sub ($year, @line) { - $cb->(parse_web_line @line); - }; - } - - # Sub: parse_gemini_logs - # - Purpose: Parse vger/relayd lines, merge paired entries, and emit events. - # - Params: $last_processed_date (YYYYMMDD int); $cb (code) event consumer. - # - Return: undef. - sub parse_gemini_logs ($last_processed_date, $cb) { - my sub parse_date ($year, @line) { - my $timestr = "$line[0] $line[1]"; - return Time::Piece->strptime($timestr, '%b %d')->strftime("$year%m%d"); - } - - my sub parse_vger_line ($year, @line) { - my $full_path = $line[5]; - $full_path =~ s/"//g; - my ($proto, undef, $host, $uri_path) = split '/', $full_path, 4; - $uri_path = '' unless defined $uri_path; - - return { - proto => 'gemini', - host => $host, - uri_path => "/$uri_path", - status => $line[6], - date => int(parse_date($year, @line)), - time => $line[2], - }; - } - - my sub parse_relayd_line ($year, @line) { - my $date = int(parse_date($year, @line)); - - my ($ip_hash, $ip_proto) = anonymize_ip $line [12]; - return { - ip_hash => $ip_hash, - ip_proto => $ip_proto, - date => $date, - time => $line[2], - }; - } - - # Expect one vger and one relayd log line per event! So collect - # both events (one from one log line each) and then merge the result hash! 
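The pairing strategy the comment above describes (collect one partial event from the vger log and one from the relayd log, then merge them once their timestamps agree) can be illustrated as follows; the record fields are simplified stand-ins for the script's event hashes:

```python
def pair_events(records):
    # One partial event per source: vger contributes host/path/status,
    # relayd contributes the hashed client IP. Merge when times match.
    vger = relayd = None
    merged = []
    for rec in records:
        src = rec.pop("src")
        if src == "vger":
            vger = rec
        else:
            relayd = rec
        if vger and relayd and vger["time"] == relayd["time"]:
            merged.append({**vger, **relayd})
            vger = relayd = None
    return merged
```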
- my ($vger, $relayd); - read_lines gemini_logs_glob(), sub ($year, @line) { - if ($line[4] eq 'vger:') { - $vger = parse_vger_line $year, @line; - } - elsif ($line[5] eq 'relay' and startswith($line[6], 'gemini')) { - $relayd = parse_relayd_line $year, @line; - return undef - if $relayd->{date} < $last_processed_date; - } - - if (defined $vger and defined $relayd and $vger->{time} eq $relayd->{time}) { - $cb->({ %$vger, %$relayd }); - $vger = $relayd = undef; - } - - true; - }; - } - - # Sub: parse_logs - # - Purpose: Coordinate parsing for both web and gemini, aggregating into stats. - # - Params: $last_web_date, $last_gemini_date (YYYYMMDD int), $odds_file, $odds_log. - # - Return: stats hashref keyed by "proto_YYYYMMDD". - sub parse_logs ($last_web_date, $last_gemini_date, $odds_file, $odds_log) { - my $agg = Foostats::Aggregator->new($odds_file, $odds_log); - - say "Last web date: $last_web_date"; - say "Last gemini date: $last_gemini_date"; - - parse_web_logs $last_web_date, sub ($event) { - $agg->add($event); - }; - parse_gemini_logs $last_gemini_date, sub ($event) { - $agg->add($event); - }; - - return $agg->{stats}; - } -} - -# Package: Foostats::Filter — request filtering and logging -# - Purpose: Identify odd URI patterns and excessive requests per second per IP. -# - Notes: Maintains an in-process blocklist for the current run. -package Foostats::Filter { - use String::Util qw(contains startswith endswith); - - # Sub: new - # - Purpose: Construct a filter with odd patterns and a log path. - # - Params: $odds_file (str) pattern list; $log_path (str) append-only log file. - # - Return: blessed Foostats::Filter instance. - sub new ($class, $odds_file, $log_path) { - say "Logging filter to $log_path"; - my @odds = FileHelper::read_lines($odds_file); - bless { odds => \@odds, log_path => $log_path }, $class; - } - - # Sub: ok - # - Purpose: Check if an event passes filters; updates block state/logging. - # - Params: $event (hashref) normalized request. 
- # - Return: true if allowed; false if blocked. - sub ok ($self, $event) { - state %blocked = (); - return false if exists $blocked{ $event->{ip_hash} }; - - if ($self->odd($event) or $self->excessive($event)) { - ($blocked{ $event->{ip_hash} } //= 0)++; - return false; - } - else { - return true; - } - } - - # Sub: odd - # - Purpose: Match URI path against user-provided odd patterns (substring match). - # - Params: $event (hashref) with uri_path. - # - Return: true if odd (blocked), false otherwise. - sub odd ($self, $event) { - \my $uri_path = \$event->{uri_path}; - - for ($self->{odds}->@*) { - next if !defined $_ || $_ eq '' || /^\s*#/; - next unless contains($uri_path, $_); - $self->log('WARN', $uri_path, "contains $_ and is odd and will therefore be blocked!"); - return true; - } - - $self->log('OK', $uri_path, "appears fine..."); - return false; - } - - # Sub: log - # - Purpose: Deduplicated append-only logging for filter decisions. - # - Params: $severity (OK|WARN), $subject (str), $message (str). - # - Return: undef. - sub log ($self, $severity, $subject, $message) { - state %dedup; - - # Don't log if path was already logged - return if exists $dedup{$subject}; - $dedup{$subject} = 1; - - open(my $fh, '>>', $self->{log_path}) or die $self->{log_path} . ": $!"; - print $fh "$severity: $subject $message\n"; - close($fh); - } - - # Sub: excessive - # - Purpose: Block if an IP makes more than one request within the same second. - # - Params: $event (hashref) with time and ip_hash. - # - Return: true if blocked; false otherwise. - sub excessive ($self, $event) { - \my $time = \$event->{time}; - \my $ip_hash = \$event->{ip_hash}; - - state $last_time = $time; # Time with second: 'HH:MM:SS' - state %count = (); # IPs accessing within the same second! - - if ($last_time ne $time) { - $last_time = $time; - %count = (); - return false; - } - - # IP requested site more than once within the same second!? 
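The `excessive()` rule (block an IP that issues more than one counted request within the same wall-clock second) can be paraphrased as a small stateful checker; a minimal sketch mirroring the script's state handling, where the first event of each new second resets the counters and is itself not counted:

```python
def make_excessive_checker():
    # Shared state across calls, like the Perl 'state' variables.
    state = {"last_time": None, "count": {}}

    def excessive(time_s, ip_hash):
        if state["last_time"] != time_s:
            # New second: reset per-IP counters, never block.
            state["last_time"] = time_s
            state["count"] = {}
            return False
        state["count"][ip_hash] = state["count"].get(ip_hash, 0) + 1
        return state["count"][ip_hash] > 1

    return excessive
```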
- if (1 < ++($count{$ip_hash} //= 0)) { - $self->log('WARN', $ip_hash, "blocked due to excessive requesting..."); - return true; - } - - return false; - } -} - -# Package: Foostats::Aggregator — in-memory stats builder -# - Purpose: Apply filters and accumulate counts, unique IPs per feed/page. -package Foostats::Aggregator { - use String::Util qw(contains startswith endswith); - - use constant { - ATOM_FEED_URI => '/gemfeed/atom.xml', - GEMFEED_URI => '/gemfeed/index.gmi', - GEMFEED_URI_2 => '/gemfeed/', - }; - - # Sub: new - # - Purpose: Construct aggregator with a filter and empty stats store. - # - Params: $odds_file (str), $odds_log (str). - # - Return: Foostats::Aggregator instance. - sub new ($class, $odds_file, $odds_log) { - bless { filter => Foostats::Filter->new($odds_file, $odds_log), stats => {} }, $class; - } - - # Sub: add - # - Purpose: Apply filter, update counts and unique-IP sets, and return event. - # - Params: $event (hashref) normalized event; ignored if undef. - # - Return: $event; filtered events increment filtered count only. - sub add ($self, $event) { - return undef unless defined $event; - - my $date = $event->{date}; - my $date_key = $event->{proto} . "_$date"; - - # Stats data model per protocol+day (key: "proto_YYYYMMDD"): - # - count: per-proto request count, per IP version, and filtered count - # - feed_ips: unique IPs per feed type (atom_feed, gemfeed) - # - page_ips: unique IPs per host and per URL - $self->{stats}{$date_key} //= { - count => { filtered => 0, }, - feed_ips => { - atom_feed => {}, - gemfeed => {}, - }, - page_ips => { - hosts => {}, - urls => {}, - }, - }; - - \my $s = \$self->{stats}{$date_key}; - unless ($self->{filter}->ok($event)) { - $s->{count}{filtered}++; - return $event; - } - - $self->add_count($s, $event); - $self->add_page_ips($s, $event) unless $self->add_feed_ips($s, $event); - return $event; - } - - # Sub: add_count - # - Purpose: Increment totals by protocol and IP version. 
- # - Params: $stats (hashref) date bucket; $event (hashref). - # - Return: undef. - sub add_count ($self, $stats, $event) { - \my $c = \$stats->{count}; - \my $e = \$event; - - ($c->{ $e->{proto} } //= 0)++; - ($c->{ $e->{ip_proto} } //= 0)++; - } - - # Sub: add_feed_ips - # - Purpose: If event hits feed endpoints, add unique IP and short-circuit. - # - Params: $stats (hashref), $event (hashref). - # - Return: 1 if feed matched; 0 otherwise. - sub add_feed_ips ($self, $stats, $event) { - \my $f = \$stats->{feed_ips}; - \my $e = \$event; - - # Atom feed (exact path match, allow optional query string) - if ($e->{uri_path} =~ m{^/gemfeed/atom\.xml(?:[?#].*)?$}) { - ($f->{atom_feed}->{ $e->{ip_hash} } //= 0)++; - return 1; - } - - # Gemfeed index: '/gemfeed/' or '/gemfeed/index.gmi' (optionally with query) - if ($e->{uri_path} =~ m{^/gemfeed/(?:index\.gmi)?(?:[?#].*)?$}) { - ($f->{gemfeed}->{ $e->{ip_hash} } //= 0)++; - return 1; - } - - return 0; - } - - # Sub: add_page_ips - # - Purpose: Track unique IPs per host and per URL for .html/.gmi pages. - # - Params: $stats (hashref), $event (hashref). - # - Return: undef. - sub add_page_ips ($self, $stats, $event) { - \my $e = \$event; - \my $p = \$stats->{page_ips}; - - return if !endswith($e->{uri_path}, '.html') && !endswith($e->{uri_path}, '.gmi'); - - ($p->{hosts}->{ $e->{host} }->{ $e->{ip_hash} } //= 0)++; - ($p->{urls}->{ $e->{host} . $e->{uri_path} }->{ $e->{ip_hash} } //= 0)++; - } -} - -# Package: Foostats::FileOutputter — write per-day stats to disk -# - Purpose: Persist aggregated stats to gzipped JSON files under a stats dir. -package Foostats::FileOutputter { - use JSON; - use Sys::Hostname; - use PerlIO::gzip; - - # Sub: new - # - Purpose: Create outputter with stats_dir; ensures directory exists. - # - Params: %args (hash) must include stats_dir. - # - Return: Foostats::FileOutputter instance. 
- sub new ($class, %args) { - my $self = bless \%args, $class; - mkdir $self->{stats_dir} or die $self->{stats_dir} . ": $!" unless -d $self->{stats_dir}; - return $self; - } - - # Sub: last_processed_date - # - Purpose: Determine the most recent processed date for a protocol for this host. - # - Params: $proto (str) 'web' or 'gemini'. - # - Return: YYYYMMDD int (0 if none found). - sub last_processed_date ($self, $proto) { - my $hostname = hostname(); - my @processed = glob $self->{stats_dir} . "/${proto}_????????.$hostname.json.gz"; - my ($date) = @processed ? ($processed[-1] =~ /_(\d{8})\.$hostname\.json.gz/) : 0; - return int($date); - } - - # Sub: write - # - Purpose: Write one gzipped JSON file per date bucket to stats_dir. - # - Params: none (uses $self->{stats}). - # - Return: undef. - sub write ($self) { - $self->for_dates( - sub ($self, $date_key, $stats) { - my $hostname = hostname(); - my $path = $self->{stats_dir} . "/${date_key}.$hostname.json.gz"; - FileHelper::write_json_gz $path, $stats; - } - ); - } - - # Sub: for_dates - # - Purpose: Iterate date-keyed stats in sorted order and call $cb. - # - Params: $cb (code) receives ($self, $date_key, $stats). - # - Return: undef. - sub for_dates ($self, $cb) { - $cb->($self, $_, $self->{stats}{$_}) for sort keys $self->{stats}->%*; - } -} - -# Package: Foostats::Replicator — pull partner stats files over HTTP(S) -# - Purpose: Fetch recent partner node stats into local stats dir. -package Foostats::Replicator { - use JSON; - use File::Basename; - use LWP::UserAgent; - use String::Util qw(endswith); - - # Sub: replicate - # - Purpose: For each proto and last 31 days, replicate newest files. - # - Params: $stats_dir (str) local dir; $partner_node (str) hostname. - # - Return: undef (best-effort fetches). 
- sub replicate ($stats_dir, $partner_node) { - say "Replicating from $partner_node"; - - for my $proto (qw(gemini web)) { - my $count = 0; - - for my $date (DateHelper::last_month_dates) { - my $file_base = "${proto}_${date}"; - my $dest_path = "${file_base}.$partner_node.json.gz"; - - replicate_file( - "https://$partner_node/foostats/$dest_path", - "$stats_dir/$dest_path", - $count++ < 3, # Always replicate the newest 3 files. - ); - } - } - } - - # Sub: replicate_file - # - Purpose: Download a single URL to a destination unless already present (unless forced). - # - Params: $remote_url (str) source; $dest_path (str) destination; $force (bool/int). - # - Return: undef; logs failures. - sub replicate_file ($remote_url, $dest_path, $force) { - - # $dest_path already exists, not replicating it - return if !$force && -f $dest_path; - - say "Replicating $remote_url to $dest_path (force:$force)... "; - my $response = LWP::UserAgent->new->get($remote_url); - unless ($response->is_success) { - say "\nFailed to fetch the file: " . $response->status_line; - return; - } - - FileHelper::write $dest_path, $response->decoded_content; - say 'done'; - } -} - -# Package: Foostats::Merger — merge per-host daily stats into a single view -# - Purpose: Merge multiple node files per day into totals and unique counts. -package Foostats::Merger { - - # Sub: merge - # - Purpose: Produce merged stats for the last month (date => stats hashref). - # - Params: $stats_dir (str) directory with daily gz JSON files. - # - Return: hash (not ref) of date => merged stats. - sub merge ($stats_dir) { - my %merge; - $merge{$_} = merge_for_date($stats_dir, $_) for DateHelper::last_month_dates; - return %merge; - } - - # Sub: merge_for_date - # - Purpose: Merge all node files for a specific date into one stats hashref. - # - Params: $stats_dir (str), $date (YYYYMMDD str/int). - # - Return: { feed_ips => {...}, count => {...}, page_ips => {...} }. 
- sub merge_for_date ($stats_dir, $date) { - printf "Merging for date %s\n", $date; - my @stats = stats_for_date($stats_dir, $date); - return { - feed_ips => feed_ips(@stats), - count => count(@stats), - page_ips => page_ips(@stats), - }; - } - - # Sub: merge_ips - # - Purpose: Deep-ish merge helper: sums numbers, merges hash-of-hash counts. - # - Params: $a (hashref target), $b (hashref source), $key_transform (code|undef). - # - Return: undef; updates $a in place; dies on incompatible types. - sub merge_ips ($a, $b, $key_transform = undef) { - my sub merge ($a, $b) { - while (my ($key, $val) = each %$b) { - $a->{$key} //= 0; - $a->{$key} += $val; - } - } - - my $is_num = qr/^\d+(\.\d+)?$/; - - while (my ($key, $val) = each %$b) { - $key = $key_transform->($key) if defined $key_transform; - - if (not exists $a->{$key}) { - $a->{$key} = $val; - } - elsif (ref($a->{$key}) eq 'HASH' && ref($val) eq 'HASH') { - merge($a->{$key}, $val); - } - elsif ($a->{$key} =~ $is_num && $val =~ $is_num) { - $a->{$key} += $val; - } - else { - die sprintf "Not merging key '%s' (ref:%s): '%s' (ref:%s) with '%s' (ref:%s)\n", - $key, - ref($key), $a->{$key}, - ref($a->{$key}), - $val, - ref($val); - } - } - } - - # Sub: feed_ips - # - Purpose: Merge feed unique-IP sets from per-proto stats into totals. - # - Params: @stats (list of stats hashrefs) each with {proto, feed_ips}. - # - Return: hashref with Total and per-proto feed counts. - sub feed_ips (@stats) { - my (%gemini, %web); - - for my $stats (@stats) { - my $merge = $stats->{proto} eq 'web' ? 
\%web : \%gemini; - printf "Merging proto %s feed IPs\n", $stats->{proto}; - merge_ips($merge, $stats->{feed_ips}); - } - - my %total; - merge_ips(\%total, $web{$_}) for keys %web; - merge_ips(\%total, $gemini{$_}) for keys %gemini; - - my %merge = ( - 'Total' => scalar keys %total, - 'Gemini Gemfeed' => scalar keys $gemini{gemfeed}->%*, - 'Gemini Atom' => scalar keys $gemini{atom_feed}->%*, - 'Web Gemfeed' => scalar keys $web{gemfeed}->%*, - 'Web Atom' => scalar keys $web{atom_feed}->%*, - ); - - return \%merge; - } - - # Sub: count - # - Purpose: Sum request counters across stats for the day. - # - Params: @stats (list of stats hashrefs) each with {count}. - # - Return: hashref of summed counters. - sub count (@stats) { - my %merge; - - for my $stats (@stats) { - while (my ($key, $val) = each $stats->{count}->%*) { - $merge{$key} //= 0; - $merge{$key} += $val; - } - } - - return \%merge; - } - - # Sub: page_ips - # - Purpose: Merge unique IPs per host and per URL; coalesce truncated endings. - # - Params: @stats (list of stats hashrefs) with {page_ips}{urls,hosts}. - # - Return: hashref with urls/hosts each mapping => unique counts. - sub page_ips (@stats) { - my %merge = ( - urls => {}, - hosts => {} - ); - - for my $key (keys %merge) { - merge_ips( - $merge{$key}, - $_->{page_ips}->{$key}, - sub ($key) { - $key =~ s/\.gmi$/\.html/; - $key; - } - ) for @stats; - - # Keep only uniq IP count - $merge{$key}->{$_} = scalar keys $merge{$key}->{$_}->%* for keys $merge{$key}->%*; - } - - return \%merge; - } - - # Sub: stats_for_date - # - Purpose: Load all stats files for a date across protos; tag proto/path. - # - Params: $stats_dir (str), $date (YYYYMMDD). - # - Return: list of stats hashrefs. 
- sub stats_for_date ($stats_dir, $date) { - my @stats; - - for my $proto (qw(gemini web)) { - for my $path (<$stats_dir/${proto}_${date}.*.json.gz>) { - printf "Reading %s\n", $path; - push @stats, FileHelper::read_json_gz($path); - @{ $stats[-1] }{qw(proto path)} = ($proto, $path); - } - } - - return @stats; - } -} - -# Package: Foostats::Reporter — build gemtext/HTML daily and summary reports -# - Purpose: Render daily reports and rolling summaries (30/365), and index pages. -package Foostats::Reporter { - use Time::Piece; - use HTML::Entities qw(encode_entities); - - our @TRUNCATED_URL_MAPPINGS; - - sub reset_truncated_url_mappings { @TRUNCATED_URL_MAPPINGS = (); } - - sub _record_truncated_url_mapping { - my ($truncated, $original) = @_; - push @TRUNCATED_URL_MAPPINGS, { truncated => $truncated, original => $original }; - } - - sub _lookup_full_url_for { - my ($candidate) = @_; - for my $idx (0 .. $#TRUNCATED_URL_MAPPINGS) { - my $entry = $TRUNCATED_URL_MAPPINGS[$idx]; - next unless $entry->{truncated} eq $candidate; - my $original = $entry->{original}; - splice @TRUNCATED_URL_MAPPINGS, $idx, 1; - return $original; - } - return undef; - } - - # Sub: truncate_url - # - Purpose: Middle-ellipsize long URLs to fit within a target length. - # - Params: $url (str), $max_length (int default 100). - # - Return: possibly truncated string. - sub truncate_url { - my ($url, $max_length) = @_; - $max_length //= 100; # Default to 100 characters - return $url if length($url) <= $max_length; - - # Calculate how many characters we need to remove - my $ellipsis = '...'; - my $ellipsis_length = length($ellipsis); - my $available_length = $max_length - $ellipsis_length; - - # Split available length between start and end, favoring the end - my $keep_start = int($available_length * 0.4); # 40% for start - my $keep_end = $available_length - $keep_start; # 60% for end - - my $start = substr($url, 0, $keep_start); - my $end = substr($url, -$keep_end); - - return $start . $ellipsis . 
$end; - } - - # Sub: truncate_urls_for_table - # - Purpose: Truncate URL cells in-place to fit target table width. - # - Params: $url_rows (arrayref of [url,count]), $count_column_header (str). - # - Return: undef; mutates $url_rows. - sub truncate_urls_for_table { - my ($url_rows, $count_column_header) = @_; - - # Calculate the maximum width needed for the count column - my $max_count_width = length($count_column_header); - for my $row (@$url_rows) { - my $count_width = length($row->[1]); - $max_count_width = $count_width if $count_width > $max_count_width; - } - - # Row format: "| URL... | count |" with padding - # Calculate: "| " (2) + URL + " | " (3) + count_with_padding + " |" (2) - my $max_url_length = 100 - 7 - $max_count_width; - $max_url_length = 70 if $max_url_length > 70; # Cap at reasonable length - - # Truncate URLs in place - for my $row (@$url_rows) { - my $original = $row->[0]; - my $truncated = truncate_url($original, $max_url_length); - if ($truncated ne $original) { - _record_truncated_url_mapping($truncated, $original); - } - $row->[0] = $truncated; - } - } - - # Sub: format_table - # - Purpose: Render a simple monospace table from headers and rows. - # - Params: $headers (arrayref), $rows (arrayref of arrayrefs). - # - Return: string with lines separated by \n. - sub format_table { - my ($headers, $rows) = @_; - - my @widths; - for my $col (0 .. $#{$headers}) { - my $max_width = length($headers->[$col]); - for my $row (@$rows) { - my $len = length($row->[$col]); - $max_width = $len if $len > $max_width; - } - push @widths, $max_width; - } - - my $header_line = '|'; - my $separator_line = '|'; - for my $col (0 .. $#{$headers}) { - $header_line .= sprintf(" %-*s |", $widths[$col], $headers->[$col]); - $separator_line .= '-' x ($widths[$col] + 2) . 
'|'; - } - - my @table_lines; - push @table_lines, $separator_line; # Add top terminator - push @table_lines, $header_line; - push @table_lines, $separator_line; - - for my $row (@$rows) { - my $row_line = '|'; - for my $col (0 .. $#{$row}) { - $row_line .= sprintf(" %-*s |", $widths[$col], $row->[$col]); - } - push @table_lines, $row_line; - } - - push @table_lines, $separator_line; # Add bottom terminator - - return join("\n", @table_lines); - } - - # Convert gemtext to HTML - # Sub: gemtext_to_html - # - Purpose: Convert a subset of Gemtext to compact HTML, incl. code blocks and lists. - # - Params: $content (str) Gemtext. - # - Return: HTML string (fragment). - sub gemtext_to_html { - my ($content) = @_; - my $html = ""; - my @lines = split /\n/, $content; - my $i = 0; - - while ($i < @lines) { - my $line = $lines[$i]; - - if ($line =~ /^```/) { - my @block_lines; - $i++; # Move past the opening ``` - while ($i < @lines && $lines[$i] !~ /^```/) { - push @block_lines, $lines[$i]; - $i++; - } - $html .= _gemtext_to_html_code_block(\@block_lines); - } - elsif ($line =~ /^### /) { - $html .= _gemtext_to_html_heading($line); - } - elsif ($line =~ /^## /) { - $html .= _gemtext_to_html_heading($line); - } - elsif ($line =~ /^# /) { - $html .= _gemtext_to_html_heading($line); - } - elsif ($line =~ /^=> /) { - $html .= _gemtext_to_html_link($line); - } - elsif ($line =~ /^\* /) { - my @list_items; - while ($i < @lines && $lines[$i] =~ /^\* /) { - push @list_items, $lines[$i]; - $i++; - } - $html .= _gemtext_to_html_list(\@list_items); - $i--; # Decrement to re-evaluate the current line in the outer loop - } - elsif ($line !~ /^\s*$/) { - $html .= _gemtext_to_html_paragraph($line); - } - - # Else, it's a blank line, which we skip for compact output. 
- $i++; - } - - return $html; - } - - sub _gemtext_to_html_code_block { - my ($lines) = @_; - if (is_ascii_table($lines)) { - return convert_ascii_table_to_html($lines); - } - else { - my $html = "<pre>\n"; - for my $code_line (@$lines) { - $html .= encode_entities($code_line) . "\n"; - } - $html .= "</pre>\n"; - return $html; - } - } - - sub _gemtext_to_html_heading { - my ($line) = @_; - if ($line =~ /^### (.*)/) { - return "<h3>" . encode_entities($1) . "</h3>\n"; - } - elsif ($line =~ /^## (.*)/) { - return "<h2>" . encode_entities($1) . "</h2>\n"; - } - elsif ($line =~ /^# (.*)/) { - return "<h1>" . encode_entities($1) . "</h1>\n"; - } - return ''; - } - - sub _gemtext_to_html_link { - my ($line) = @_; - if ($line =~ /^=> (\S+)\s+(.*)/) { - my ($url, $text) = ($1, $2); - - # Drop 365-day summary links from HTML output - return '' if $url =~ /(?:^|[\/.])365day_summary_\d{8}\.gmi$/; - - # Convert .gmi links to .html - $url =~ s/\.gmi$/\.html/; - return - "<p><a href=\"" - . encode_entities($url) . "\">" - . encode_entities($text) - . "</a></p>\n"; - } - return ''; - } - - sub _gemtext_to_html_list { - my ($lines) = @_; - my $html = "<ul>\n"; - for my $line (@$lines) { - if ($line =~ /^\* (.*)/) { - $html .= "<li>" . linkify_text($1) . "</li>\n"; - } - } - $html .= "</ul>\n"; - return $html; - } - - sub _gemtext_to_html_paragraph { - my ($line) = @_; - return "<p>" . linkify_text($line) . "</p>\n"; - } - - # Check if the lines form an ASCII table - # Sub: is_ascii_table - # - Purpose: Heuristically detect if a code block is an ASCII table. - # - Params: $lines (arrayref of strings). - # - Return: 1 if likely table; 0 otherwise. 
- sub is_ascii_table { - my ($lines) = @_; - return 0 if @$lines < 3; # Need at least header, separator, and one data row - - # Check for separator lines with dashes and pipes - for my $line (@$lines) { - return 1 if $line =~ /^\|?[\s\-]+\|/; - } - return 0; - } - - # Convert ASCII table to HTML table - # Sub: convert_ascii_table_to_html - # - Purpose: Convert simple ASCII table lines to an HTML <table>. - # - Params: $lines (arrayref of strings). - # - Return: HTML string. - sub convert_ascii_table_to_html { - my ($lines) = @_; - my $html = "<table>\n"; - my $row_count = 0; - my $total_col_idx = -1; - - for my $line (@$lines) { - - # Skip separator lines - next if $line =~ /^\|?[\s\-]+\|/ && $line =~ /\-/; - - # Parse table row - my @cells = split /\s*\|\s*/, $line; - @cells = grep { length($_) > 0 } @cells; # Remove empty cells - - if (@cells) { - my $is_total_row = (trim($cells[0]) eq 'Total'); - $html .= "<tr>\n"; - - if ($row_count == 0) { # Header row - for my $i (0 .. $#cells) { - if (trim($cells[$i]) eq 'Total') { - $total_col_idx = $i; - last; - } - } - } - - my $tag = ($row_count == 0) ? "th" : "td"; - for my $i (0 .. $#cells) { - my $val = trim($cells[$i]); - my $cell_content = linkify_text($val); - - if ($is_total_row || ($i == $total_col_idx && $row_count > 0)) { - $html .= " <$tag><b>" . $cell_content . "</b></$tag>\n"; - } - else { - $html .= " <$tag>" . $cell_content . "</$tag>\n"; - } - } - $html .= "</tr>\n"; - $row_count++; - } - } - - $html .= "</table>\n"; - return $html; - } - - # Trim whitespace from string - # Sub: trim - # - Purpose: Strip leading/trailing whitespace. - # - Params: $str (str). - # - Return: trimmed string. - sub trim { - my ($str) = @_; - $str =~ s/^\s+//; - $str =~ s/\s+$//; - return $str; - } - - # Build an href for a token that looks like a URL or FQDN - # Sub: _guess_href - # - Purpose: Infer absolute href for a token (supports gemini for .gmi). - # - Params: $token (str) token from text. 
- # - Return: href string or undef. - sub _guess_href { - my ($token) = @_; - my $t = $token; - $t =~ s/^\s+//; - $t =~ s/\s+$//; - - # Already absolute http(s) - return $t if $t =~ m{^https?://}i; - - # Extract trailing punctuation to avoid including it in href - my $trail = ''; - if ($t =~ s{([)\]\}.,;:!?]+)$}{}) { $trail = $1; } - - # host[/path] - if ($t =~ m{^([A-Za-z0-9.-]+\.[A-Za-z]{2,})(/[^\s<]*)?$}) { - my ($host, $path) = ($1, $2 // ''); - my $is_gemini = defined($path) && $path =~ /\.gmi(?:[?#].*)?$/i; - my $scheme = 'https'; - - # If truncated, fall back to host root - my $href = sprintf('%s://%s%s', $scheme, $host, ($path eq '' ? '/' : $path)); - return ($href . $trail); - } - - return undef; - } - - # Turn any URLs/FQDNs in the provided text into anchors - # Sub: linkify_text - # - Purpose: Replace URL/FQDN tokens in text with HTML anchors. - # - Params: $text (str) input text. - # - Return: HTML string with entities encoded. - sub linkify_text { - my ($text) = @_; - return '' unless defined $text; - - my $out = ''; - my $pos = 0; - while ($text =~ m{((?:https?://)?[A-Za-z0-9.-]+\.[A-Za-z]{2,}(?:/[^\s<]*)?)}g) { - my $match = $1; - my $start = $-[1]; - my $end = $+[1]; - - # Emit preceding text - $out .= encode_entities(substr($text, $pos, $start - $pos)); - - # Separate trailing punctuation from the match - my ($core, $trail) = ($match, ''); - if ($core =~ s{([)\]\}.,;:!?]+)$}{}) { $trail = $1; } - - my $display = $core; - if (my $full = _lookup_full_url_for($core)) { - $display = $full; - } - - my $href = _guess_href($display); - if (!$href) { - $href = _guess_href($core); - } - - if ($href) { - $href =~ s/\.gmi$/\.html/i; - $out .= sprintf( - '<a href="%s">%s</a>%s', - encode_entities($href), encode_entities($display), - encode_entities($trail) - ); - } - else { - # Not a linkable token after all - $out .= encode_entities($match); - } - $pos = $end; - } - - # Remainder - $out .= encode_entities(substr($text, $pos)); - return $out; - } - - # Use 
HTML::Entities::encode_entities imported above - - # Generate HTML wrapper - # Sub: generate_html_page - # - Purpose: Wrap content in a minimal HTML5 page with a title and CSS reset. - # - Params: $title (str), $content (str) HTML fragment. - # - Return: full HTML page string. - sub generate_html_page { - my ($title, $content) = @_; - return qq{<!DOCTYPE html> -<html lang="en"> -<head> - <meta charset="UTF-8"> - <meta name="viewport" content="width=device-width, initial-scale=1.0"> - <title>$title</title> - <style> - /* Compact, full-width layout */ - :root { - --pad: 8px; - } - html, body { - height: 100%; - } - body { - font-family: ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace; - line-height: 1.3; - margin: 0; - padding: var(--pad); - background: #fff; - color: #000; - } - /* Headings: smaller and tighter */ - h1, h2, h3 { margin: 0.5em 0 0.25em; font-weight: 600; } - h1 { font-size: 1em; } - h2 { font-size: 0.95em; } - h3 { font-size: 0.9em; } - /* Paragraphs and lists: minimal vertical rhythm */ - p { margin: 0.2em 0; } - ul { margin: 0.3em 0; padding-left: 1.2em; } - li { margin: 0.1em 0; } - /* Code blocks and tables */ - pre { - overflow-x: auto; - white-space: pre; - margin: 0.3em 0; - } - table { - border-collapse: collapse; - table-layout: auto; /* size columns by content */ - width: auto; /* do not stretch to full width */ - max-width: 100%; - margin: 0.5em 0; - font-size: 0.95em; - display: inline-table; /* keep as compact as content allows */ - } - th, td { - padding: 0.1em 0.3em; - text-align: left; - white-space: nowrap; /* avoid wide columns caused by wrapping */ - } - /* Links */ - a { color: #06c; text-decoration: underline; } - a:visited { color: #639; } - /* Rules */ - hr { border: none; border-top: 1px solid #ccc; margin: 0.5em 0; } - </style> -</head> -<body> -$content -</body> -</html> -}; - } - - # Sub: should_generate_daily_report - # - Purpose: Check if a daily report should be 
generated based on file existence and age. - # - Params: $date (YYYYMMDD), $report_path (str), $html_report_path (str). - # - Return: 1 if report should be generated, 0 otherwise. - sub should_generate_daily_report { - my ($date, $report_path, $html_report_path) = @_; - - my ($year, $month, $day) = $date =~ /(\d{4})(\d{2})(\d{2})/; - - # Calculate age of the data based on date in filename - my $today = Time::Piece->new(); - my $file_date = Time::Piece->strptime($date, '%Y%m%d'); - my $age_days = ($today - $file_date) / (24 * 60 * 60); - - if (-e $report_path && -e $html_report_path) { - - # Files exist - if ($age_days <= 3) { - - # Data is recent (within 3 days), regenerate it - say "Regenerating daily report for $year-$month-$day (data age: " - . sprintf("%.1f", $age_days) - . " days)"; - return 1; - } - else { - # Data is old (older than 3 days), skip if files exist - say "Skipping daily report for $year-$month-$day (files exist, data age: " - . sprintf("%.1f", $age_days) - . " days)"; - return 0; - } - } - else { - # File doesn't exist, generate it - say "Generating new daily report for $year-$month-$day (file doesn't exist, data age: " - . sprintf("%.1f", $age_days) - . 
" days)"; - return 1; - } - } - - sub generate_feed_stats_section { - my ($stats) = @_; - my $report_content = "### Feed Statistics\n\n"; - my @feed_rows; - push @feed_rows, [ 'Total', $stats->{feed_ips}{'Total'} // 0 ]; - push @feed_rows, [ 'Gemini Gemfeed', $stats->{feed_ips}{'Gemini Gemfeed'} // 0 ]; - push @feed_rows, [ 'Gemini Atom', $stats->{feed_ips}{'Gemini Atom'} // 0 ]; - push @feed_rows, [ 'Web Gemfeed', $stats->{feed_ips}{'Web Gemfeed'} // 0 ]; - push @feed_rows, [ 'Web Atom', $stats->{feed_ips}{'Web Atom'} // 0 ]; - $report_content .= "```\n"; - $report_content .= format_table([ 'Feed Type', 'Count' ], \@feed_rows); - $report_content .= "\n```\n\n"; - return $report_content; - } - - sub generate_top_n_table { - my (%args) = @_; - my $title = $args{title}; - my $data = $args{data}; - my $headers = $args{headers}; - my $limit = $args{limit} // 50; - my $is_url = $args{is_url} // 0; - - my $report_content = "### $title\n\n"; - my @rows; - my @sorted_keys = - sort { ($data->{$b} // 0) <=> ($data->{$a} // 0) } - keys %$data; - my $truncated = @sorted_keys > $limit; - @sorted_keys = @sorted_keys[ 0 .. $limit - 1 ] if $truncated; - - for my $key (@sorted_keys) { - push @rows, [ $key, $data->{$key} // 0 ]; - } - - if ($is_url) { - truncate_urls_for_table(\@rows, $headers->[1]); - } - - $report_content .= "```\n"; - $report_content .= format_table($headers, \@rows); - $report_content .= "\n```\n"; - if ($truncated) { - $report_content .= "\n... 
and more (truncated to $limit entries).\n"; - } - $report_content .= "\n"; - return $report_content; - } - - sub generate_top_urls_section { - my ($stats) = @_; - return generate_top_n_table( - title => 'Top 50 URLs', - data => $stats->{page_ips}{urls}, - headers => [ 'URL', 'Unique Visitors' ], - is_url => 1, - ); - } - - sub generate_top_hosts_section { - my ($stats) = @_; - return generate_top_n_table( - title => 'Page Statistics (by Host)', - data => $stats->{page_ips}{hosts}, - headers => [ 'Host', 'Unique Visitors' ], - ); - } - - sub generate_summary_section { - my ($stats) = @_; - my $report_content = "### Summary\n\n"; - my $total_requests = - ($stats->{count}{gemini} // 0) + ($stats->{count}{web} // 0); - $report_content .= "* Total requests: $total_requests\n"; - $report_content .= - "* Filtered requests: " . ($stats->{count}{filtered} // 0) . "\n"; - $report_content .= - "* Gemini requests: " . ($stats->{count}{gemini} // 0) . "\n"; - $report_content .= - "* Web requests: " . ($stats->{count}{web} // 0) . "\n"; - $report_content .= - "* IPv4 requests: " . ($stats->{count}{IPv4} // 0) . "\n"; - $report_content .= - "* IPv6 requests: " . ($stats->{count}{IPv6} // 0) . "\n\n"; - return $report_content; - } - - # Sub: report - # - Purpose: Generate daily .gmi and .html reports per date, then summaries and index. - # - Params: $stats_dir, $output_dir, $html_output_dir, %merged (date => stats). - # - Return: undef. 
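
The truncation logic in `generate_top_n_table` relies on a common Perl idiom: sort hash keys by their values descending, then take an array slice of the first N. A minimal standalone sketch of that pattern (the sample data and `$limit` are hypothetical, not from foostats):

```perl
use strict;
use warnings;
use feature 'say';

# Hypothetical visitor counts; foostats builds such hashes from parsed logs.
my %visitors = (
    '/index.gmi' => 42,
    '/about.gmi' => 7,
    '/stats.gmi' => 19,
);
my $limit = 2;

# Sort keys by count, descending; slice down to the top N entries.
my @sorted = sort { ($visitors{$b} // 0) <=> ($visitors{$a} // 0) } keys %visitors;
my $truncated = @sorted > $limit;
@sorted = @sorted[ 0 .. $limit - 1 ] if $truncated;

say "$_ => $visitors{$_}" for @sorted;
say "... and more (truncated to $limit entries)." if $truncated;
```

The `// 0` guards keep the comparator total even when a key is missing, which is why the same defensive lookup appears throughout the reporting subs.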
- sub report { - my ($stats_dir, $output_dir, $html_output_dir, %merged) = @_; - for my $date (sort { $b cmp $a } keys %merged) { - my $stats = $merged{$date}; - next unless $stats->{count}; - - my ($year, $month, $day) = $date =~ /(\d{4})(\d{2})(\d{2})/; - - my $report_path = "$output_dir/$date.gmi"; - my $html_report_path = "$html_output_dir/$date.html"; - - next unless should_generate_daily_report($date, $report_path, $html_report_path); - - reset_truncated_url_mappings(); - my $report_content = "## Stats for $year-$month-$day\n\n"; - $report_content .= generate_feed_stats_section($stats); - $report_content .= generate_top_urls_section($stats); - $report_content .= generate_top_hosts_section($stats); - $report_content .= generate_summary_section($stats); - - # Add links to summary reports (only monthly) - $report_content .= "## Related Reports\n\n"; - my $now = localtime; - my $current_date = $now->strftime('%Y%m%d'); - $report_content .= "=> ./30day_summary_$current_date.gmi 30-Day Summary Report\n\n"; - - # Ensure output directory exists - mkdir $output_dir unless -d $output_dir; - - # $report_path already defined above - say "Writing report to $report_path"; - FileHelper::write($report_path, $report_content); - - # Also write HTML version - mkdir $html_output_dir unless -d $html_output_dir; - my $html_path = "$html_output_dir/$date.html"; - my $html_content = gemtext_to_html($report_content); - my $html_page = generate_html_page("Stats for $year-$month-$day", $html_content); - say "Writing HTML report to $html_path"; - FileHelper::write($html_path, $html_page); - reset_truncated_url_mappings(); - } - - # Generate summary reports - generate_summary_report(30, $stats_dir, $output_dir, $html_output_dir, %merged); - - # Generate index.gmi and index.html - generate_index($output_dir, $html_output_dir); - } - - # Sub: generate_summary_report - # - Purpose: Generate N-day rolling summary in .gmi (+.html except 365-day). 
- # - Params: $days (int), $stats_dir, $output_dir, $html_output_dir, %merged. - # - Return: undef. - sub generate_summary_report { - my ($days, $stats_dir, $output_dir, $html_output_dir, %merged) = @_; - - # Get the last N days of dates - my @dates = sort { $b cmp $a } keys %merged; - my $max_index = $days - 1; - @dates = @dates[ 0 .. $max_index ] if @dates > $days; - - my $today = localtime; - my $report_date = $today->strftime('%Y%m%d'); - - # Build report content - reset_truncated_url_mappings(); - my $report_content = build_report_header($today, $days); - - # Order: feed counts -> Top URLs -> daily top 3 for last 30 days -> other tables - $report_content .= build_feed_statistics_section(\@dates, \%merged); - $report_content .= build_feed_statistics_daily_average_section(\@dates, \%merged); - - # Aggregate and add top lists - my ($all_hosts, $all_urls) = aggregate_hosts_and_urls(\@dates, \%merged); - $report_content .= build_top_urls_section($all_urls, $days); - $report_content .= build_top3_urls_last_n_days_per_day($stats_dir, 30, \%merged); - $report_content .= build_top_hosts_section($all_hosts, $days); - $report_content .= build_daily_summary_section(\@dates, \%merged); - - # Add links to other summary reports - $report_content .= build_summary_links($days, $report_date); - - # Ensure output directory exists and write the summary report - mkdir $output_dir unless -d $output_dir; - - my $report_path = "$output_dir/${days}day_summary_$report_date.gmi"; - say "Writing $days-day summary report to $report_path"; - FileHelper::write($report_path, $report_content); - - # Also write HTML version, except for 365-day summaries (HTML suppressed) - if ($days != 365) { - mkdir $html_output_dir unless -d $html_output_dir; - my $html_path = "$html_output_dir/${days}day_summary_$report_date.html"; - my $html_content = gemtext_to_html($report_content); - my $html_page = generate_html_page("$days-Day Summary Report", $html_content); - say "Writing HTML $days-day summary 
report to $html_path"; - FileHelper::write($html_path, $html_page); - } - else { - say "Skipping HTML generation for 365-day summary (Gemtext only)"; - } - - reset_truncated_url_mappings(); - } - - sub build_feed_statistics_daily_average_section { - my ($dates, $merged) = @_; - - my %totals; - my $days_with_stats = 0; - - for my $date (@$dates) { - my $stats = $merged->{$date}; - next unless $stats->{feed_ips}; - $days_with_stats++; - - for my $key (keys %{ $stats->{feed_ips} }) { - $totals{$key} += $stats->{feed_ips}{$key}; - } - } - - return "" unless $days_with_stats > 0; - - my @avg_rows; - my $total_avg = 0; - my $has_total = 0; - - # Separate 'Total' from other keys - my @other_keys; - for my $key (keys %totals) { - if ($key eq 'Total') { - $total_avg = sprintf("%.2f", $totals{$key} / $days_with_stats); - $has_total = 1; - } - else { - push @other_keys, $key; - } - } - - # Sort other keys and create rows - for my $key (sort @other_keys) { - my $avg = sprintf("%.2f", $totals{$key} / $days_with_stats); - push @avg_rows, [ $key, $avg ]; - } - - # Add Total row at the end - push @avg_rows, [ 'Total', $total_avg ] if $has_total; - - my $content = "### Feed Statistics Daily Average (Last 30 Days)\n\n```\n"; - $content .= format_table([ 'Feed Type', 'Daily Average' ], \@avg_rows); - $content .= "\n```\n\n"; - - return $content; - } - - # Sub: build_report_header - # - Purpose: Header section for summary reports. - # - Params: $today (Time::Piece), $days (int default 30). - # - Return: gemtext string. - sub build_report_header { - my ($today, $days) = @_; - $days //= 30; # Default to 30 days for backward compatibility - - my $content = "# $days-Day Summary Report\n\n"; - $content .= "Generated on " . $today->strftime('%Y-%m-%d') . "\n\n"; - return $content; - } - - # Sub: build_daily_summary_section - # - Purpose: Table of daily total counts over a period. - # - Params: $dates (arrayref YYYYMMDD), $merged (hashref date=>stats). - # - Return: gemtext string. 
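
The averaging pass above first sums per-day counts into `%totals` while counting how many days actually carried stats, then divides once at the end. A compact sketch of that two-phase approach, with hypothetical sample data standing in for the merged stats structure:

```perl
use strict;
use warnings;

# Hypothetical per-day feed counts keyed by date (YYYYMMDD).
my %merged = (
    '20251001' => { feed_ips => { 'Gemini Atom' => 10, 'Total' => 14 } },
    '20251002' => { feed_ips => { 'Gemini Atom' => 20, 'Total' => 26 } },
);

my (%totals, $days_with_stats);
for my $date (keys %merged) {
    my $feed = $merged{$date}{feed_ips} or next;
    $days_with_stats++;
    $totals{$_} += $feed->{$_} for keys %$feed;
}

# Divide once per key, after all days are summed.
for my $key (sort keys %totals) {
    printf "%-12s %.2f\n", $key, $totals{$key} / $days_with_stats;
}
```

Dividing only at the end avoids accumulating rounding error, and skipping days without `feed_ips` keeps the denominator honest.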
- sub build_daily_summary_section { - my ($dates, $merged) = @_; - - my $content = "## Daily Summary Evolution (Last 30 Days)\n\n"; - $content .= "### Total Requests by Day\n\n```\n"; - - my @summary_rows; - for my $date (reverse @$dates) { - my $stats = $merged->{$date}; - next unless $stats->{count}; - - push @summary_rows, build_daily_summary_row($date, $stats); - } - - $content .= format_table([ 'Date', 'Filtered', 'Gemini', 'Web', 'IPv4', 'IPv6', 'Total' ], - \@summary_rows); - $content .= "\n```\n\n"; - - return $content; - } - - # Sub: build_daily_summary_row - # - Purpose: Build one table row with counts for a date. - # - Params: $date (YYYYMMDD), $stats (hashref). - # - Return: arrayref of cell strings. - sub build_daily_summary_row { - my ($date, $stats) = @_; - - my ($year, $month, $day) = $date =~ /(\d{4})(\d{2})(\d{2})/; - my $formatted_date = "$year-$month-$day"; - - my $total_requests = ($stats->{count}{gemini} // 0) + ($stats->{count}{web} // 0); - my $filtered = $stats->{count}{filtered} // 0; - my $gemini = $stats->{count}{gemini} // 0; - my $web = $stats->{count}{web} // 0; - my $ipv4 = $stats->{count}{IPv4} // 0; - my $ipv6 = $stats->{count}{IPv6} // 0; - - return [ $formatted_date, $filtered, $gemini, $web, $ipv4, $ipv6, $total_requests ]; - } - - # Sub: build_feed_statistics_section - # - Purpose: Table of feed unique counts by day over a period. - # - Params: $dates (arrayref), $merged (hashref). - # - Return: gemtext string. 
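
The `build_*_row` helpers all unpack the compact `YYYYMMDD` stamp with a single list-context regex match before formatting. That idiom in isolation:

```perl
use strict;
use warnings;

# One regex match in list context yields all three captures at once.
my $date = '20251002';
my ($year, $month, $day) = $date =~ /(\d{4})(\d{2})(\d{2})/;
my $formatted_date = "$year-$month-$day";
print "$formatted_date\n";    # 2025-10-02
```

Because the anchors are implicit, a malformed stamp would leave the captures undefined; the callers assume well-formed dates from the stats filenames.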
- sub build_feed_statistics_section { - my ($dates, $merged) = @_; - - my $content = "### Feed Statistics Evolution\n\n```\n"; - - my @feed_rows; - for my $date (reverse @$dates) { - my $stats = $merged->{$date}; - next unless $stats->{feed_ips}; - - push @feed_rows, build_feed_statistics_row($date, $stats); - } - - $content .= - format_table([ 'Date', 'Gem Feed', 'Gem Atom', 'Web Feed', 'Web Atom', 'Total' ], - \@feed_rows); - $content .= "\n```\n\n"; - - return $content; - } - - # Sub: build_feed_statistics_row - # - Purpose: Build one row of feed unique counts for a date. - # - Params: $date (YYYYMMDD), $stats (hashref). - # - Return: arrayref of cell strings. - sub build_feed_statistics_row { - my ($date, $stats) = @_; - - my ($year, $month, $day) = $date =~ /(\d{4})(\d{2})(\d{2})/; - my $formatted_date = "$year-$month-$day"; - - return [ - $formatted_date, - $stats->{feed_ips}{'Gemini Gemfeed'} // 0, - $stats->{feed_ips}{'Gemini Atom'} // 0, - $stats->{feed_ips}{'Web Gemfeed'} // 0, - $stats->{feed_ips}{'Web Atom'} // 0, - $stats->{feed_ips}{'Total'} // 0 - ]; - } - - # Sub: aggregate_hosts_and_urls - # - Purpose: Sum hosts and URLs across multiple days. - # - Params: $dates (arrayref), $merged (hashref). - # - Return: (\%all_hosts, \%all_urls). 
- sub aggregate_hosts_and_urls { - my ($dates, $merged) = @_; - - my %all_hosts; - my %all_urls; - - for my $date (@$dates) { - my $stats = $merged->{$date}; - next unless $stats->{page_ips}; - - # Aggregate hosts - while (my ($host, $count) = each %{ $stats->{page_ips}{hosts} }) { - $all_hosts{$host} //= 0; - $all_hosts{$host} += $count; - } - - # Aggregate URLs - while (my ($url, $count) = each %{ $stats->{page_ips}{urls} }) { - $all_urls{$url} //= 0; - $all_urls{$url} += $count; - } - } - - return (\%all_hosts, \%all_urls); - } - - sub build_top_hosts_section { - my ($all_hosts, $days) = @_; - $days //= 30; - - return generate_top_n_table( - title => "Top 50 Hosts (${days}-Day Total)", - data => $all_hosts, - headers => [ 'Host', 'Visitors' ], - ); - } - - # Sub: build_top_urls_section - # - Purpose: Build Top-50 URLs table for the aggregated period (with truncation). - # - Params: $all_urls (hashref), $days (int default 30). - # - Return: gemtext string. - sub build_top_urls_section { - my ($all_urls, $days) = @_; - $days //= 30; - - return generate_top_n_table( - title => "Top 50 URLs (${days}-Day Total)", - data => $all_urls, - headers => [ 'URL', 'Visitors' ], - is_url => 1, - ); - } - - # Sub: build_summary_links - # - Purpose: Links to other summary reports (30-day when not already on it). - # - Params: $current_days (int), $report_date (YYYYMMDD). - # - Return: gemtext string. - sub build_summary_links { - my ($current_days, $report_date) = @_; - - my $content = ''; - - # Only add link to 30-day summary when not on the 30-day report itself - if ($current_days != 30) { - $content .= "## Other Summary Reports\n\n"; - $content .= "=> ./30day_summary_$report_date.gmi 30-Day Summary Report\n\n"; - } - - return $content; - } - - # Sub: build_top3_urls_last_n_days_per_day - # - Purpose: For each of last N days, render the top URLs table. - # - Params: $stats_dir (str), $days (int default 30), $merged (hashref). - # - Return: gemtext string. 
- sub build_top3_urls_last_n_days_per_day { - my ($stats_dir, $days, $merged) = @_; - $days //= 30; - my $content = "## Top 5 URLs Per Day (Last ${days} Days)\n\n"; - - my @all = DateHelper::last_month_dates(); - my @dates = @all; - @dates = @all[ 0 .. $days - 1 ] if @all > $days; - return $content . "(no data)\n\n" unless @dates; - - for my $date (@dates) { - - # Prefer in-memory merged stats if available; otherwise merge from disk - my $stats = $merged->{$date}; - if (!$stats || !($stats->{page_ips} && $stats->{page_ips}{urls})) { - $stats = Foostats::Merger::merge_for_date($stats_dir, $date); - } - next unless $stats && $stats->{page_ips} && $stats->{page_ips}{urls}; - - my ($y, $m, $d) = $date =~ /(\d{4})(\d{2})(\d{2})/; - $content .= "### $y-$m-$d\n\n"; - - my $urls = $stats->{page_ips}{urls}; - my @sorted = sort { ($urls->{$b} // 0) <=> ($urls->{$a} // 0) } keys %$urls; - next unless @sorted; - my $limit = @sorted < 5 ? @sorted : 5; - @sorted = @sorted[ 0 .. $limit - 1 ]; - - my @rows; - for my $u (@sorted) { - $u =~ s/\.gmi$/\.html/; - push @rows, [ $u, $urls->{$u} // 0 ]; - } - truncate_urls_for_table(\@rows, 'Visitors'); - $content .= "```\n" . format_table([ 'URL', 'Visitors' ], \@rows) . "\n```\n\n"; - } - - return $content; - } - - # Sub: generate_index - # - Purpose: Create index.gmi/.html using the latest 30-day summary as content. - # - Params: $output_dir (str), $html_output_dir (str). - # - Return: undef. 
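
`generate_index` (below) finds the newest 30-day summary purely by string comparison, which works because the filenames embed the date as `YYYYMMDD`: lexicographic order equals chronological order. A sketch with a hypothetical file list:

```perl
use strict;
use warnings;

# Reverse string sort is enough to find the newest summary, since the
# zero-padded YYYYMMDD suffix sorts the same way dates do.
my @gmi_files = (
    '30day_summary_20250901.gmi',
    '30day_summary_20251002.gmi',
    'index.gmi',
);

my @summaries = sort { $b cmp $a } grep { /^30day_summary_/ } @gmi_files;
my $latest = $summaries[0];
print "$latest\n";    # 30day_summary_20251002.gmi
```

The same trick underpins `sort { $b cmp $a } keys %merged` in the report loop: no date parsing is needed anywhere ordering matters.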
- sub generate_index { - my ($output_dir, $html_output_dir) = @_; - - # Find latest 30-day summary - opendir(my $dh, $output_dir) or die "Cannot open directory $output_dir: $!"; - my @gmi_files = grep { /\.gmi$/ && $_ ne 'index.gmi' } readdir($dh); - closedir($dh); - - my @summaries_30day = sort { $b cmp $a } grep { /^30day_summary_/ } @gmi_files; - my $latest_30 = $summaries_30day[0]; - - my $index_path = "$output_dir/index.gmi"; - mkdir $html_output_dir unless -d $html_output_dir; - my $html_path = "$html_output_dir/index.html"; - - if ($latest_30) { - - # Read 30-day summary content and use it as index - my $summary_path = "$output_dir/$latest_30"; - open my $sfh, '<', $summary_path or die "$summary_path: $!"; - local $/ = undef; - my $content = <$sfh>; - close $sfh; - - say "Writing index to $index_path (using $latest_30)"; - FileHelper::write($index_path, $content); - - # HTML: use existing 30-day summary HTML if present, else convert - (my $latest_html = $latest_30) =~ s/\.gmi$/.html/; - my $summary_html_path = "$html_output_dir/$latest_html"; - if (-e $summary_html_path) { - open my $hh, '<', $summary_html_path or die "$summary_html_path: $!"; - local $/ = undef; - my $html_page = <$hh>; - close $hh; - say "Writing HTML index to $html_path (copy of $latest_html)"; - FileHelper::write($html_path, $html_page); - } - else { - my $html_content = gemtext_to_html($content); - my $html_page = generate_html_page("30-Day Summary Report", $html_content); - say "Writing HTML index to $html_path (from gemtext)"; - FileHelper::write($html_path, $html_page); - } - return; - } - - # Fallback: minimal index if no 30-day summary found - my $fallback = "# Foostats Reports Index\n\n30-day summary not found.\n"; - say "Writing fallback index to $index_path"; - FileHelper::write($index_path, $fallback); - - my $html_content = gemtext_to_html($fallback); - my $html_page = generate_html_page("Foostats Reports Index", $html_content); - say "Writing fallback HTML index to 
$html_path"; - FileHelper::write($html_path, $html_page); - } -} - -package main; - -# Package: main — CLI entrypoint and orchestration -# - Purpose: Parse options and invoke parse/replicate/report flows. -use Getopt::Long; -use Sys::Hostname; - -# Sub: usage -# - Purpose: Print usage and exit 0. -# - Params: none. -# - Return: never (exits). -sub usage { - print <<~"USAGE"; - Usage: $0 [options] - - Options: - --parse-logs Parse web and gemini logs. - --replicate Replicate stats from partner node. - --report Generate a report from the stats. - --all Perform all of the above actions (parse, replicate, report). - --stats-dir <path> Directory to store stats files. - Default: /var/www/htdocs/buetow.org/self/foostats - --output-dir <path> Directory to write .gmi report files. - Default: /var/gemini/stats.foo.zone - --html-output-dir <path> Directory to write .html report files. - Default: /var/www/htdocs/gemtexter/stats.foo.zone - --odds-file <path> File with odd URI patterns to filter. - Default: <stats-dir>/fooodds.txt - --filter-log <path> Log file for filtered requests. - Default: /var/log/fooodds - --partner-node <hostname> Hostname of the partner node for replication. - Default: fishfinger.buetow.org or blowfish.buetow.org - --version Show version information. - --help Show this help message. - USAGE - exit 0; -} - -# Sub: parse_logs -# - Purpose: Parse logs and persist aggregated stats files under $stats_dir. -# - Params: $stats_dir (str), $odds_file (str), $odds_log (str). -# - Return: undef. -sub parse_logs ($stats_dir, $odds_file, $odds_log) { - my $out = Foostats::FileOutputter->new(stats_dir => $stats_dir); - - $out->{stats} = Foostats::Logreader::parse_logs( - $out->last_processed_date('web'), - $out->last_processed_date('gemini'), - $odds_file, $odds_log - ); - - $out->write; -} - -# Sub: foostats_main -# - Purpose: Option parsing and execution of requested actions. -# - Params: none (reads @ARGV). -# - Return: exit code via program termination. 
-sub foostats_main { - my ($parse_logs, $replicate, $report, $all, $help, $version); - - # With default values - my $stats_dir = '/var/www/htdocs/buetow.org/self/foostats'; - my $odds_file = $stats_dir . '/fooodds.txt'; - my $odds_log = '/var/log/fooodds'; - my $output_dir; # Will default to $stats_dir/gemtext if not specified - my $html_output_dir; # Will default to /var/www/htdocs/gemtexter/stats.foo.zone if not specified - my $partner_node = - hostname eq 'fishfinger.buetow.org' - ? 'blowfish.buetow.org' - : 'fishfinger.buetow.org'; - - GetOptions - 'parse-logs!' => \$parse_logs, - 'filter-log=s' => \$odds_log, - 'odds-file=s' => \$odds_file, - 'replicate!' => \$replicate, - 'report!' => \$report, - 'all!' => \$all, - 'stats-dir=s' => \$stats_dir, - 'output-dir=s' => \$output_dir, - 'html-output-dir=s' => \$html_output_dir, - 'partner-node=s' => \$partner_node, - 'version' => \$version, - 'help|?' => \$help; - - if ($version) { - print "foostats " . VERSION . "\n"; - exit 0; - } - - usage() if $help; - - parse_logs($stats_dir, $odds_file, $odds_log) if $parse_logs or $all; - Foostats::Replicator::replicate($stats_dir, $partner_node) if $replicate or $all; - - # Set default output directories if not specified - $output_dir //= '/var/gemini/stats.foo.zone'; - $html_output_dir //= '/var/www/htdocs/gemtexter/stats.foo.zone'; - - Foostats::Reporter::report($stats_dir, $output_dir, $html_output_dir, - Foostats::Merger::merge($stats_dir)) - if $report - or $all; -} - -# Only run main flow when executed as a script, not when required (e.g., tests) -foostats_main() unless caller; diff --git a/gemfeed/examples/conf/frontends/scripts/gemtexter.sh.tpl b/gemfeed/examples/conf/frontends/scripts/gemtexter.sh.tpl deleted file mode 100644 index 2bba20c7..00000000 --- a/gemfeed/examples/conf/frontends/scripts/gemtexter.sh.tpl +++ /dev/null @@ -1,65 +0,0 @@ -#!/bin/sh - -PATH=$PATH:/usr/local/bin - -function ensure_site { - dir=$1 - repo=$2 - branch=$3 - - basename=$(basename 
$dir) - parent=$(dirname $dir) - - if [ ! -d $parent ]; then - mkdir -p $parent - fi - - cd $parent - if [ ! -e www.$basename ]; then - ln -s $basename www.$basename - fi - - if [ ! -e standby.$basename ]; then - ln -s $basename standby.$basename - fi - - if [ ! -d $basename ]; then - git clone $repo -b $branch --single-branch $basename - else - cd $basename - git pull - fi -} - -function ensure_links { - dir=$1 - target=$2 - - basename=$(basename $dir) - parent=$(dirname $dir) - - cd $parent - - if [ ! -e $target ]; then - ln -s $basename $target - fi - - if [ ! -e www.$target ]; then - ln -s $basename www.$target - fi - - if [ ! -e standby.$target ]; then - ln -s $basename standby.$target - fi -} - -for site in foo.zone; do - ensure_site \ - /var/gemini/$site \ - https://codeberg.org/snonux/$site \ - content-gemtext - ensure_site \ - /var/www/htdocs/gemtexter/$site \ - https://codeberg.org/snonux/$site \ - content-html -done diff --git a/gemfeed/examples/conf/frontends/scripts/rsync.sh.tpl b/gemfeed/examples/conf/frontends/scripts/rsync.sh.tpl deleted file mode 100644 index c8d7b004..00000000 --- a/gemfeed/examples/conf/frontends/scripts/rsync.sh.tpl +++ /dev/null @@ -1,8 +0,0 @@ -#!/bin/sh - -PATH=$PATH:/usr/local/bin - -# Sync Joern's content over to Fishfinger! -if [ `hostname -s` = fishfinger ]; then - rsync -av --delete rsync://blowfish.wg0.wan.buetow.org/joernshtdocs/ /var/www/htdocs/joern/ -fi diff --git a/gemfeed/examples/conf/frontends/scripts/taskwarrior.sh.tpl b/gemfeed/examples/conf/frontends/scripts/taskwarrior.sh.tpl deleted file mode 100644 index aaafbe98..00000000 --- a/gemfeed/examples/conf/frontends/scripts/taskwarrior.sh.tpl +++ /dev/null @@ -1,5 +0,0 @@ -PATH=$PATH:/usr/local/bin - -echo "Any tasks due before the next 14 days?" -# Using git user, as ssh keys are already there to sync the task db! 
-su - git -c '/usr/local/bin/task rc:/etc/taskrc due.before:14day minimal 2>/dev/null'
diff --git a/gemfeed/examples/conf/frontends/var/nsd/etc/key.conf.tpl b/gemfeed/examples/conf/frontends/var/nsd/etc/key.conf.tpl
deleted file mode 100644
index d8d6c76d..00000000
--- a/gemfeed/examples/conf/frontends/var/nsd/etc/key.conf.tpl
+++ /dev/null
@@ -1,4 +0,0 @@
-key:
-    name: blowfish.buetow.org
-    algorithm: hmac-sha256
-    secret: "<%= $nsd_key %>"
diff --git a/gemfeed/examples/conf/frontends/var/nsd/etc/nsd.conf.master.tpl b/gemfeed/examples/conf/frontends/var/nsd/etc/nsd.conf.master.tpl
deleted file mode 100644
index 7f5ba56f..00000000
--- a/gemfeed/examples/conf/frontends/var/nsd/etc/nsd.conf.master.tpl
+++ /dev/null
@@ -1,17 +0,0 @@
-include: "/var/nsd/etc/key.conf"
-
-server:
-    hide-version: yes
-    verbosity: 1
-    database: "" # disable database
-    debug-mode: no
-
-remote-control:
-    control-enable: yes
-    control-interface: /var/run/nsd.sock
-
-<% for my $zone (@$dns_zones) { %>
-zone:
-    name: "<%= $zone %>"
-    zonefile: "master/<%= $zone %>.zone"
-<% } %>
diff --git a/gemfeed/examples/conf/frontends/var/nsd/etc/nsd.conf.slave.tpl b/gemfeed/examples/conf/frontends/var/nsd/etc/nsd.conf.slave.tpl
deleted file mode 100644
index d9d93fe6..00000000
--- a/gemfeed/examples/conf/frontends/var/nsd/etc/nsd.conf.slave.tpl
+++ /dev/null
@@ -1,17 +0,0 @@
-include: "/var/nsd/etc/key.conf"
-
-server:
-    hide-version: yes
-    verbosity: 1
-    database: "" # disable database
-
-remote-control:
-    control-enable: yes
-    control-interface: /var/run/nsd.sock
-
-<% for my $zone (@$dns_zones) { %>
-zone:
-    name: "<%= $zone %>"
-    allow-notify: 23.88.35.144 blowfish.buetow.org
-    request-xfr: 23.88.35.144 blowfish.buetow.org
-<% } %>
diff --git a/gemfeed/examples/conf/frontends/var/nsd/zones/master/buetow.org.zone.tpl b/gemfeed/examples/conf/frontends/var/nsd/zones/master/buetow.org.zone.tpl
deleted file mode 100644
index 0a0fb36f..00000000
--- a/gemfeed/examples/conf/frontends/var/nsd/zones/master/buetow.org.zone.tpl
+++ /dev/null
@@ -1,124 +0,0 @@
-$ORIGIN buetow.org.
-$TTL 4h
-@ IN SOA fishfinger.buetow.org. hostmaster.buetow.org. (
-        <%= time() %> ; serial
-        1h ; refresh
-        30m ; retry
-        7d ; expire
-        1h ) ; negative
-        IN NS fishfinger.buetow.org.
-        IN NS blowfish.buetow.org.
-
-        300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-        300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-www 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-www 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-standby 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover
-standby 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover
-master 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-master 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-
-        IN MX 10 fishfinger.buetow.org.
-        IN MX 20 blowfish.buetow.org.
-
-cool IN NS ns-75.awsdns-09.com.
-cool IN NS ns-707.awsdns-24.net.
-cool IN NS ns-1081.awsdns-07.org.
-cool IN NS ns-1818.awsdns-35.co.uk.
-
-paul 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-paul 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-www.paul 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-www.paul 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-standby.paul 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover
-standby.paul 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover
-
-blog 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-blog 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-www.blog 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-www.blog 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-standby.blog 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover
-standby.blog 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover
-
-tmp 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-tmp 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-www.tmp 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-www.tmp 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-standby.tmp 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover
-standby.tmp 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover
-
-<% for my $host (@$f3s_hosts) { -%>
-<%= $host %>. 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-<%= $host %>. 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-www.<%= $host %>. 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-www.<%= $host %>. 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-standby.<%= $host %>. 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover
-standby.<%= $host %>. 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover
-<% } -%>
-
-; So joern can directly preview the content before rsync happens from blowfish to fishfinger
-joern IN CNAME blowfish
-www.joern IN CNAME blowfish
-standby.joern IN CNAME fishfinger
-
-dory 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-dory 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-www.dory 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-www.dory 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-standby.dory 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover
-standby.dory 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover
-
-ecat 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-ecat 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-www.ecat 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-www.ecat 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-standby.ecat 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover
-standby.ecat 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover
-
-fotos 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-fotos 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-www.fotos 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-www.fotos 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-standby.fotos 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover
-standby.fotos 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover
-
-git 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-git 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-www.git 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-www.git 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-standby.git 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover
-standby.git 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover
-
-blowfish 14400 IN A 23.88.35.144
-blowfish 14400 IN AAAA 2a01:4f8:c17:20f1::42
-blowfish IN MX 10 fishfinger.buetow.org.
-blowfish IN MX 20 blowfish.buetow.org.
-fishfinger 14400 IN A 46.23.94.99
-fishfinger 14400 IN AAAA 2a03:6000:6f67:624::99
-fishfinger IN MX 10 fishfinger.buetow.org.
-fishfinger IN MX 20 blowfish.buetow.org.
-
-git1 1800 IN CNAME blowfish.buetow.org.
-git2 1800 IN CNAME fishfinger.buetow.org.
-
-zapad.sofia 14400 IN CNAME 79-100-3-54.ip.btc-net.bg.
-www2 14400 IN CNAME snonux.codeberg.page.
-znc 1800 IN CNAME fishfinger.buetow.org.
-www.znc 1800 IN CNAME fishfinger.buetow.org.
-standby.znc 1800 IN CNAME fishfinger.buetow.org.
-bnc 1800 IN CNAME fishfinger.buetow.org.
-www.bnc 1800 IN CNAME fishfinger.buetow.org.
-
-protonmail._domainkey.paul IN CNAME protonmail.domainkey.d4xua2siwqfhvecokhuacmyn5fyaxmjk6q3hu2omv2z43zzkl73yq.domains.proton.ch.
-protonmail2._domainkey.paul IN CNAME protonmail2.domainkey.d4xua2siwqfhvecokhuacmyn5fyaxmjk6q3hu2omv2z43zzkl73yq.domains.proton.ch.
-protonmail3._domainkey.paul IN CNAME protonmail3.domainkey.d4xua2siwqfhvecokhuacmyn5fyaxmjk6q3hu2omv2z43zzkl73yq.domains.proton.ch.
-paul IN TXT protonmail-verification=a42447901e320064d13e536db4d73ce600d715b7
-paul IN TXT v=spf1 include:_spf.protonmail.ch mx ~all
-paul IN TXT v=DMARC1; p=none
-paul IN MX 10 mail.protonmail.ch.
-paul IN MX 20 mailsec.protonmail.ch.
-paul IN MX 42 blowfish.buetow.org.
-paul IN MX 42 fishfinger.buetow.org.
-
-* IN MX 10 fishfinger.buetow.org.
-* IN MX 20 blowfish.buetow.org.
diff --git a/gemfeed/examples/conf/frontends/var/nsd/zones/master/dtail.dev.zone.tpl b/gemfeed/examples/conf/frontends/var/nsd/zones/master/dtail.dev.zone.tpl
deleted file mode 100644
index d5196e04..00000000
--- a/gemfeed/examples/conf/frontends/var/nsd/zones/master/dtail.dev.zone.tpl
+++ /dev/null
@@ -1,21 +0,0 @@
-$ORIGIN dtail.dev.
-$TTL 4h
-@ IN SOA fishfinger.buetow.org. hostmaster.buetow.org. (
-        <%= time() %> ; serial
-        1h ; refresh
-        30m ; retry
-        7d ; expire
-        1h ) ; negative
-        IN NS fishfinger.buetow.org.
-        IN NS blowfish.buetow.org.
-
-        IN MX 10 fishfinger.buetow.org.
-        IN MX 20 blowfish.buetow.org.
-
-        300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-        300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-www 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-www 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-standby 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover
-standby 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover
-github 86400 IN CNAME mimecast.github.io.
diff --git a/gemfeed/examples/conf/frontends/var/nsd/zones/master/foo.zone.zone.tpl b/gemfeed/examples/conf/frontends/var/nsd/zones/master/foo.zone.zone.tpl
deleted file mode 100644
index d0755c91..00000000
--- a/gemfeed/examples/conf/frontends/var/nsd/zones/master/foo.zone.zone.tpl
+++ /dev/null
@@ -1,34 +0,0 @@
-$ORIGIN foo.zone.
-$TTL 4h
-@ IN SOA fishfinger.buetow.org. hostmaster.buetow.org. (
-        <%= time() %> ; serial
-        1h ; refresh
-        30m ; retry
-        7d ; expire
-        1h ) ; negative
-        IN NS fishfinger.buetow.org.
-        IN NS blowfish.buetow.org.
-
-        IN MX 10 fishfinger.buetow.org.
-        IN MX 20 blowfish.buetow.org.
-
-        300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-        300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-www 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-www 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-standby 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover
-standby 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover
-
-f3s 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-f3s 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-www.f3s 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-www.f3s 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-standby.f3s 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover
-standby.f3s 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover
-
-stats 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-stats 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-www.stats 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-www.stats 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-standby.stats 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-standby.stats 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
diff --git a/gemfeed/examples/conf/frontends/var/nsd/zones/master/irregular.ninja.zone.tpl b/gemfeed/examples/conf/frontends/var/nsd/zones/master/irregular.ninja.zone.tpl
deleted file mode 100644
index d4f3d622..00000000
--- a/gemfeed/examples/conf/frontends/var/nsd/zones/master/irregular.ninja.zone.tpl
+++ /dev/null
@@ -1,23 +0,0 @@
-$ORIGIN irregular.ninja.
-$TTL 4h
-@ IN SOA fishfinger.buetow.org. hostmaster.buetow.org. (
-        <%= time() %> ; serial
-        1h ; refresh
-        30m ; retry
-        7d ; expire
-        1h ) ; negative
-        IN NS fishfinger.buetow.org.
-        IN NS blowfish.buetow.org.
-
-        300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-        300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-www 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-www 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-standby 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover
-standby 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover
-www.alt 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-www.alt 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-alt 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-alt 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-standby.alt 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover
-standby.alt 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover
diff --git a/gemfeed/examples/conf/frontends/var/nsd/zones/master/paul.cyou.zone.tpl b/gemfeed/examples/conf/frontends/var/nsd/zones/master/paul.cyou.zone.tpl
deleted file mode 100644
index fdffef4f..00000000
--- a/gemfeed/examples/conf/frontends/var/nsd/zones/master/paul.cyou.zone.tpl
+++ /dev/null
@@ -1,20 +0,0 @@
-$ORIGIN paul.cyou.
-$TTL 4h
-@ IN SOA fishfinger.buetow.org. hostmaster.buetow.org. (
-        <%= time() %> ; serial
-        1h ; refresh
-        30m ; retry
-        7d ; expire
-        1h ) ; negative
-        IN NS fishfinger.buetow.org.
-        IN NS blowfish.buetow.org.
-
-        IN MX 10 fishfinger.buetow.org.
-        IN MX 20 blowfish.buetow.org.
-
-        300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-        300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-www 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-www 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-standby 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover
-standby 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover
diff --git a/gemfeed/examples/conf/frontends/var/nsd/zones/master/snonux.foo.zone.tpl b/gemfeed/examples/conf/frontends/var/nsd/zones/master/snonux.foo.zone.tpl
deleted file mode 100644
index a9d002ae..00000000
--- a/gemfeed/examples/conf/frontends/var/nsd/zones/master/snonux.foo.zone.tpl
+++ /dev/null
@@ -1,20 +0,0 @@
-$ORIGIN snonux.foo.
-$TTL 4h
-@ IN SOA fishfinger.buetow.org. hostmaster.buetow.org. (
-        <%= time() %> ; serial
-        1h ; refresh
-        30m ; retry
-        7d ; expire
-        1h ) ; negative
-        IN NS fishfinger.buetow.org.
-        IN NS blowfish.buetow.org.
-
-        IN MX 10 fishfinger.buetow.org.
-        IN MX 20 blowfish.buetow.org.
-
-        300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-        300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-www 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover
-www 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover
-standby 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover
-standby 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover
diff --git a/gemfeed/examples/conf/frontends/var/www/htdocs/buetow.org/self/index.txt.tpl b/gemfeed/examples/conf/frontends/var/www/htdocs/buetow.org/self/index.txt.tpl
deleted file mode 100644
index 6b8979da..00000000
--- a/gemfeed/examples/conf/frontends/var/www/htdocs/buetow.org/self/index.txt.tpl
+++ /dev/null
@@ -1 +0,0 @@
-Welcome to <%= $hostname.'.'.$domain %>!
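All of the zone templates above repeat one failover idiom: short-TTL (300 s) A/AAAA records filled in with the current master's addresses, plus a `standby` name pointing at the standby host. As an illustration (not taken from a real render), assuming blowfish (23.88.35.144 / 2a01:4f8:c17:20f1::42) is currently master and fishfinger (46.23.94.99 / 2a03:6000:6f67:624::99) is standby, one rendered stanza might look like:

```
; Hypothetical expansion of one failover stanza from the templates above
        300 IN A    23.88.35.144
        300 IN AAAA 2a01:4f8:c17:20f1::42
www     300 IN A    23.88.35.144
www     300 IN AAAA 2a01:4f8:c17:20f1::42
standby 300 IN A    46.23.94.99
standby 300 IN AAAA 2a03:6000:6f67:624::99
```

The 300-second TTL keeps resolver caches short, so re-rendering the zone with master and standby swapped propagates within minutes.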
diff --git a/gemfeed/examples/conf/playground/README.md b/gemfeed/examples/conf/playground/README.md
deleted file mode 100644
index 0ed0975c..00000000
--- a/gemfeed/examples/conf/playground/README.md
+++ /dev/null
@@ -1,3 +0,0 @@
-# Playground
-
-Some playground/testing with Rex!
diff --git a/gemfeed/examples/conf/playground/Rexfile b/gemfeed/examples/conf/playground/Rexfile
deleted file mode 100644
index 056a82e8..00000000
--- a/gemfeed/examples/conf/playground/Rexfile
+++ /dev/null
@@ -1,24 +0,0 @@
-use Rex -feature => ['1.14', 'exec_autodie'];
-use Rex::Logger;
-use Rex::Commands::Cron;
-
-group openbsd_canary => 'blowfish.buetow.org:2';
-
-user 'rex';
-sudo TRUE;
-
-parallelism 5;
-
-desc 'Cron test';
-task 'openbsd_cron_test', group => 'openbsd_canary', sub {
-    cron add => '_gogios', {
-        minute       => '5',
-        hour         => '*',
-        day_of_month => '*',
-        month        => '*',
-        day_of_week  => '*',
-        command      => '/path/to/your/cronjob',
-    };
-};
-
-# vim: syntax=perl
diff --git a/gemfeed/examples/conf/playground/openbsd_cron_test.debug.txt b/gemfeed/examples/conf/playground/openbsd_cron_test.debug.txt
deleted file mode 100644
index 30fd1c09..00000000
--- a/gemfeed/examples/conf/playground/openbsd_cron_test.debug.txt
+++ /dev/null
@@ -1,766 +0,0 @@
-[paul@earth]~/git/rexfiles/testing% rex -m -d openbsd_cron_test &> openbsd_cron_test.debug.txt
-[2023-07-30 13:36:36] DEBUG - This is Rex version: 1.14.2
-[2023-07-30 13:36:36] DEBUG - Command Line Parameters
-[2023-07-30 13:36:36] DEBUG - m = 1
-[2023-07-30 13:36:36] DEBUG - d = 1
-[2023-07-30 13:36:36] DEBUG - Creating lock-file (Rexfile.lock)
-[2023-07-30 13:36:36] DEBUG - Loading Rexfile
-[2023-07-30 13:36:36] DEBUG - Disabling usage of a tty
-[2023-07-30 13:36:36] DEBUG - Activating autodie.
-[2023-07-30 13:36:36] DEBUG - Using Net::OpenSSH if present.
-[2023-07-30 13:36:36] DEBUG - Add service check.
-[2023-07-30 13:36:36] DEBUG - Setting set() to not append data.
-[2023-07-30 13:36:36] DEBUG - Registering CMDB as template variables.
-[2023-07-30 13:36:36] DEBUG - activating featureset >= 0.51
-[2023-07-30 13:36:36] DEBUG - activating featureset >= 0.40
-[2023-07-30 13:36:36] DEBUG - activating featureset >= 0.35
-[2023-07-30 13:36:36] DEBUG - activating featureset >= 0.31
-[2023-07-30 13:36:36] DEBUG - Enabling exec_autodie
-[2023-07-30 13:36:36] DEBUG - Turning sudo globally on
-[2023-07-30 13:36:36] DEBUG - Creating new distribution class of type: Base
-[2023-07-30 13:36:36] DEBUG - new distribution class of type Rex::TaskList::Base created.
-[2023-07-30 13:36:36] DEBUG - Creating task: openbsd_cron_test
-[2023-07-30 13:36:36] DEBUG - Found Net::OpenSSH and Net::SFTP::Foreign - using it as default
-[2023-07-30 13:36:36] DEBUG - Registering task: openbsd_cron_test
-[2023-07-30 13:36:36] DEBUG - Initializing Logger from parameters found in Rexfile
-[2023-07-30 13:36:36] DEBUG - Returning existing distribution class of type: Rex::TaskList::Base
-[2023-07-30 13:36:36] DEBUG - Returning existing distribution class of type: Rex::TaskList::Base
-[2023-07-30 13:36:36] DEBUG - Waiting for children to finish
-[2023-07-30 13:36:36] INFO - Running task openbsd_cron_test on blowfish.buetow.org:2
-[2023-07-30 13:36:36] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:36] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:36] DEBUG - $VAR1 = '';
-
-[2023-07-30 13:36:36] DEBUG - Auth-Information inside Task:
-[2023-07-30 13:36:36] DEBUG - password => [[%s]]
-[2023-07-30 13:36:36] DEBUG - auth_type => [[try]]
-[2023-07-30 13:36:36] DEBUG - public_key => [[]]
-[2023-07-30 13:36:36] DEBUG - sudo => [[]]
-[2023-07-30 13:36:36] DEBUG - sudo_password => [[**********]]
-[2023-07-30 13:36:36] DEBUG - port => [[]]
-[2023-07-30 13:36:36] DEBUG - user => [[rex]]
-[2023-07-30 13:36:36] DEBUG - private_key => [[]]
-[2023-07-30 13:36:36] DEBUG - Using Net::OpenSSH for connection
-[2023-07-30 13:36:36] DEBUG - Using user: rex
-[2023-07-30 13:36:36] DEBUG - Connecting to blowfish.buetow.org:2 (rex)
-[2023-07-30 13:36:36] DEBUG - get_openssh_opt()
-[2023-07-30 13:36:36] DEBUG - $VAR1 = {};
-
-[2023-07-30 13:36:36] DEBUG - OpenSSH: key_auth or not defined: blowfish.buetow.org:2 - rex
-[2023-07-30 13:36:36] DEBUG - OpenSSH options:
-[2023-07-30 13:36:36] DEBUG - $VAR1 = [
-          'blowfish.buetow.org',
-          'user',
-          'rex',
-          'port',
-          '2',
-          'master_opts',
-          [
-            '-o',
-            'LogLevel=QUIET',
-            '-o',
-            'ConnectTimeout=2'
-          ],
-          'default_ssh_opts',
-          $VAR1->[6]
-        ];
-
-[2023-07-30 13:36:36] DEBUG - OpenSSH constructor options:
-[2023-07-30 13:36:36] DEBUG - $VAR1 = {};
-
-[2023-07-30 13:36:36] DEBUG - Trying following auth types:
-[2023-07-30 13:36:36] DEBUG - $VAR1 = [
-          'key',
-          'pass'
-        ];
-
-[2023-07-30 13:36:36] DEBUG - Current Error-Code: 0
-[2023-07-30 13:36:36] DEBUG - Connected and authenticated to blowfish.buetow.org.
-[2023-07-30 13:36:37] DEBUG - Successfully authenticated on blowfish.buetow.org:2.
-[2023-07-30 13:36:37] DEBUG - Executing: perl -MFile::Spec -le 'print File::Spec->tmpdir'
-[2023-07-30 13:36:37] DEBUG - Detecting shell...
-[2023-07-30 13:36:37] DEBUG - Searching for shell: zsh
-[2023-07-30 13:36:37] DEBUG - Searching for shell: ksh
-[2023-07-30 13:36:37] DEBUG - Found shell and using: ksh
-[2023-07-30 13:36:37] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:37] DEBUG - $VAR1 = {};
-
-[2023-07-30 13:36:37] DEBUG - SSH/executing: LC_ALL=C ; export LC_ALL; perl -MFile::Spec -le 'print File::Spec->tmpdir'
-[2023-07-30 13:36:37] DEBUG - /tmp
-
-[2023-07-30 13:36:37] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:37] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:37] DEBUG - Sudo: Executing: which perl
-[2023-07-30 13:36:37] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:37] DEBUG - $VAR1 = {
-          'prepend_command' => 'sudo -p \'\' -S',
-          'fail_ok' => 1,
-          'valid_retval' => [
-            0
-          ]
-        };
-
-[2023-07-30 13:36:37] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/pkg/bin:/usr/pkg/sbin; export PATH; which perl '
-[2023-07-30 13:36:37] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/pkg/bin:/usr/pkg/sbin; export PATH; which perl '
-[2023-07-30 13:36:37] DEBUG - /usr/bin/perl
-
-[2023-07-30 13:36:37] DEBUG - Executing openbsd_cron_test
-[2023-07-30 13:36:37] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:37] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:37] DEBUG - Sudo: Executing: which lsb_release
-[2023-07-30 13:36:37] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:37] DEBUG - $VAR1 = {
-          'prepend_command' => 'sudo -p \'\' -S',
-          'fail_ok' => 1,
-          'valid_retval' => [
-            0
-          ]
-        };
-
-[2023-07-30 13:36:37] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/pkg/bin:/usr/pkg/sbin; export PATH; which lsb_release '
-[2023-07-30 13:36:37] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/pkg/bin:/usr/pkg/sbin; export PATH; which lsb_release '
-[2023-07-30 13:36:37] DEBUG - ========= ERR ============
-[2023-07-30 13:36:37] DEBUG - which: lsb_release: Command not found.
-
-[2023-07-30 13:36:37] DEBUG - ========= ERR ============
-[2023-07-30 13:36:37] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:37] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:37] DEBUG - Sudo: Executing: test -d c:/
-[2023-07-30 13:36:37] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:37] DEBUG - $VAR1 = {
-          'prepend_command' => 'sudo -p \'\' -S'
-        };
-
-[2023-07-30 13:36:37] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d c:/ '
-[2023-07-30 13:36:37] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d c:/ '
-[2023-07-30 13:36:37] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:37] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:37] DEBUG - Sudo: Executing: test -e /etc/system-release
-[2023-07-30 13:36:37] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:37] DEBUG - $VAR1 = {
-          'prepend_command' => 'sudo -p \'\' -S'
-        };
-
-[2023-07-30 13:36:37] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/system-release '
-[2023-07-30 13:36:37] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/system-release '
-[2023-07-30 13:36:37] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:37] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:37] DEBUG - Sudo: Executing: test -d /etc/system-release
-[2023-07-30 13:36:37] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:37] DEBUG - $VAR1 = {
-          'prepend_command' => 'sudo -p \'\' -S'
-        };
-
-[2023-07-30 13:36:37] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/system-release '
-[2023-07-30 13:36:37] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/system-release '
-[2023-07-30 13:36:37] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:37] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:37] DEBUG - Sudo: Executing: test -e /etc/debian_version
-[2023-07-30 13:36:37] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:37] DEBUG - $VAR1 = {
-          'prepend_command' => 'sudo -p \'\' -S'
-        };
-
-[2023-07-30 13:36:37] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/debian_version '
-[2023-07-30 13:36:37] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/debian_version '
-[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:38] DEBUG - Sudo: Executing: test -d /etc/debian_version
-[2023-07-30 13:36:38] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:38] DEBUG - $VAR1 = {
-          'prepend_command' => 'sudo -p \'\' -S'
-        };
-
-[2023-07-30 13:36:38] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/debian_version '
-[2023-07-30 13:36:38] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/debian_version '
-[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:38] DEBUG - Sudo: Executing: test -e /etc/SuSE-release
-[2023-07-30 13:36:38] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:38] DEBUG - $VAR1 = {
-          'prepend_command' => 'sudo -p \'\' -S'
-        };
-
-[2023-07-30 13:36:38] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/SuSE-release '
-[2023-07-30 13:36:38] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/SuSE-release '
-[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:38] DEBUG - Sudo: Executing: test -d /etc/SuSE-release
-[2023-07-30 13:36:38] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:38] DEBUG - $VAR1 = {
-          'prepend_command' => 'sudo -p \'\' -S'
-        };
-
-[2023-07-30 13:36:38] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/SuSE-release '
-[2023-07-30 13:36:38] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/SuSE-release '
-[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:38] DEBUG - Sudo: Executing: test -e /etc/SUSE-brand
-[2023-07-30 13:36:38] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:38] DEBUG - $VAR1 = {
-          'prepend_command' => 'sudo -p \'\' -S'
-        };
-
-[2023-07-30 13:36:38] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/SUSE-brand '
-[2023-07-30 13:36:38] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/SUSE-brand '
-[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:38] DEBUG - Sudo: Executing: test -d /etc/SUSE-brand
-[2023-07-30 13:36:38] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:38] DEBUG - $VAR1 = {
-          'prepend_command' => 'sudo -p \'\' -S'
-        };
-
-[2023-07-30 13:36:38] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/SUSE-brand '
-[2023-07-30 13:36:38] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/SUSE-brand '
-[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:38] DEBUG - Sudo: Executing: test -e /etc/mageia-release
-[2023-07-30 13:36:38] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:38] DEBUG - $VAR1 = {
-          'prepend_command' => 'sudo -p \'\' -S'
-        };
-
-[2023-07-30 13:36:38] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/mageia-release '
-[2023-07-30 13:36:38] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/mageia-release '
-[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:38] DEBUG - Sudo: Executing: test -d /etc/mageia-release
-[2023-07-30 13:36:38] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:38] DEBUG - $VAR1 = {
-          'prepend_command' => 'sudo -p \'\' -S'
-        };
-
-[2023-07-30 13:36:38] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/mageia-release '
-[2023-07-30 13:36:38] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/mageia-release '
-[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:38] DEBUG - Sudo: Executing: test -e /etc/fedora-release
-[2023-07-30 13:36:38] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:38] DEBUG - $VAR1 = {
-          'prepend_command' => 'sudo -p \'\' -S'
-        };
-
-[2023-07-30 13:36:38] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/fedora-release '
-[2023-07-30 13:36:38] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/fedora-release '
-[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:38] DEBUG - Sudo: Executing: test -d /etc/fedora-release
-[2023-07-30 13:36:38] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:38] DEBUG - $VAR1 = {
-          'prepend_command' => 'sudo -p \'\' -S'
-        };
-
-[2023-07-30 13:36:38] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/fedora-release '
-[2023-07-30 13:36:38] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/fedora-release '
-[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:38] DEBUG - Sudo: Executing: test -e /etc/gentoo-release
-[2023-07-30 13:36:38] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:38] DEBUG - $VAR1 = {
-          'prepend_command' => 'sudo -p \'\' -S'
-        };
-
-[2023-07-30 13:36:38] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/gentoo-release '
-[2023-07-30 13:36:38] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/gentoo-release '
-[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:38] DEBUG - Sudo: Executing: test -d /etc/gentoo-release
-[2023-07-30 13:36:38] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:38] DEBUG - $VAR1 = {
-          'prepend_command' => 'sudo -p \'\' -S'
-        };
-
-[2023-07-30 13:36:38] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/gentoo-release '
-[2023-07-30 13:36:38] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/gentoo-release '
-[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:39] DEBUG - Sudo: Executing: test -e /etc/altlinux-release
-[2023-07-30 13:36:39] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:39] DEBUG - $VAR1 = {
-          'prepend_command' => 'sudo -p \'\' -S'
-        };
-
-[2023-07-30 13:36:39] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/altlinux-release '
-[2023-07-30 13:36:39] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/altlinux-release '
-[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:39] DEBUG - Sudo: Executing: test -d /etc/altlinux-release
-[2023-07-30 13:36:39] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:39] DEBUG - $VAR1 = {
-          'prepend_command' => 'sudo -p \'\' -S'
-        };
-
-[2023-07-30 13:36:39] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/altlinux-release '
-[2023-07-30 13:36:39] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/altlinux-release '
-[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:39] DEBUG - Sudo: Executing: test -e /etc/redhat-release
-[2023-07-30 13:36:39] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:39] DEBUG - $VAR1 = {
-          'prepend_command' => 'sudo -p \'\' -S'
-        };
-
-[2023-07-30 13:36:39] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/redhat-release '
-[2023-07-30 13:36:39] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/redhat-release '
-[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:39] DEBUG - Sudo: Executing: test -d /etc/redhat-release
-[2023-07-30 13:36:39] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:39] DEBUG - $VAR1 = {
-          'prepend_command' => 'sudo -p \'\' -S'
-        };
-
-[2023-07-30 13:36:39] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/redhat-release '
-[2023-07-30 13:36:39] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/redhat-release '
-[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:39] DEBUG - Sudo: Executing: test -e /etc/openwrt_release
-[2023-07-30 13:36:39] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:39] DEBUG - $VAR1 = {
-          'prepend_command' => 'sudo -p \'\' -S'
        };
-
-[2023-07-30 13:36:39] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/openwrt_release '
-[2023-07-30 13:36:39] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/openwrt_release '
-[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:39] DEBUG - Sudo: Executing: test -d /etc/openwrt_release
-[2023-07-30 13:36:39] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:39] DEBUG - $VAR1 = {
-          'prepend_command' => 'sudo -p \'\' -S'
-        };
-
-[2023-07-30 13:36:39] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/openwrt_release '
-[2023-07-30 13:36:39] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/openwrt_release '
-[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:39] DEBUG - Sudo: Executing: test -e /etc/arch-release
-[2023-07-30 13:36:39] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:39] DEBUG - $VAR1 = {
-          'prepend_command' => 'sudo -p \'\' -S'
-        };
-
-[2023-07-30 13:36:39] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/arch-release '
-[2023-07-30 13:36:39] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/arch-release '
-[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:39] DEBUG - Sudo: Executing: test -d /etc/arch-release
-[2023-07-30 13:36:39] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:39] DEBUG - $VAR1 = {
-          'prepend_command' => 'sudo -p \'\' -S'
-        };
-
-[2023-07-30 13:36:39] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/arch-release '
-[2023-07-30 13:36:39] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/arch-release '
-[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:39] DEBUG - Sudo: Executing: test -e /etc/manjaro-release
-[2023-07-30 13:36:39] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:39] DEBUG - $VAR1 = {
-          'prepend_command' => 'sudo -p \'\' -S'
-        };
-
-[2023-07-30 13:36:39] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/manjaro-release '
-[2023-07-30 13:36:39] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/manjaro-release '
-[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:39] DEBUG - Sudo: Executing: test -d /etc/manjaro-release
-[2023-07-30 13:36:39] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:39] DEBUG - $VAR1 = {
-          'prepend_command' => 'sudo -p \'\' -S'
-        };
-
-[2023-07-30 13:36:39] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/manjaro-release '
-[2023-07-30 13:36:39] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/manjaro-release '
-[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:39] DEBUG - Sudo: Executing: uname -s
-[2023-07-30 13:36:39] DEBUG - Shell/Bash: Got options:
-[2023-07-30 13:36:39] DEBUG - $VAR1 = {
-          'prepend_command' => 'sudo -p \'\' -S',
-          'fail_ok' => 0,
-          'valid_retval' => [
-            0
-          ]
-        };
-
-[2023-07-30 13:36:39] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/pkg/bin:/usr/pkg/sbin; export PATH; uname -s '
-[2023-07-30 13:36:39] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/pkg/bin:/usr/pkg/sbin; export PATH; uname -s '
-[2023-07-30 13:36:40] DEBUG - OpenBSD
-
-[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (private_key): returning
-[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (public_key): returning
-[2023-07-30 13:36:40] DEBUG - Sudo: Executing: which lsb_release
-[2023-07-30 13:36:40]
DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:40] DEBUG - $VAR1 = { - 'prepend_command' => 'sudo -p \'\' -S', - 'fail_ok' => 1, - 'valid_retval' => [ - 0 - ] - }; - -[2023-07-30 13:36:40] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/pkg/bin:/usr/pkg/sbin; export PATH; which lsb_release ' -[2023-07-30 13:36:40] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/pkg/bin:/usr/pkg/sbin; export PATH; which lsb_release ' -[2023-07-30 13:36:40] DEBUG - ========= ERR ============ -[2023-07-30 13:36:40] DEBUG - which: lsb_release: Command not found. - -[2023-07-30 13:36:40] DEBUG - ========= ERR ============ -[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (private_key): returning -[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (public_key): returning -[2023-07-30 13:36:40] DEBUG - Sudo: Executing: test -d c:/ -[2023-07-30 13:36:40] DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:40] DEBUG - $VAR1 = { - 'prepend_command' => 'sudo -p \'\' -S' - }; - -[2023-07-30 13:36:40] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d c:/ ' -[2023-07-30 13:36:40] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d c:/ ' -[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (private_key): returning -[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (public_key): returning -[2023-07-30 13:36:40] DEBUG - Sudo: Executing: test -e /etc/system-release -[2023-07-30 13:36:40] DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:40] DEBUG - $VAR1 = { - 'prepend_command' => 'sudo -p \'\' -S' - }; - -[2023-07-30 13:36:40] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/system-release ' -[2023-07-30 13:36:40] DEBUG - 
Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/system-release ' -[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (private_key): returning -[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (public_key): returning -[2023-07-30 13:36:40] DEBUG - Sudo: Executing: test -d /etc/system-release -[2023-07-30 13:36:40] DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:40] DEBUG - $VAR1 = { - 'prepend_command' => 'sudo -p \'\' -S' - }; - -[2023-07-30 13:36:40] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/system-release ' -[2023-07-30 13:36:40] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/system-release ' -[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (private_key): returning -[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (public_key): returning -[2023-07-30 13:36:40] DEBUG - Sudo: Executing: test -e /etc/debian_version -[2023-07-30 13:36:40] DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:40] DEBUG - $VAR1 = { - 'prepend_command' => 'sudo -p \'\' -S' - }; - -[2023-07-30 13:36:40] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/debian_version ' -[2023-07-30 13:36:40] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/debian_version ' -[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (private_key): returning -[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (public_key): returning -[2023-07-30 13:36:40] DEBUG - Sudo: Executing: test -d /etc/debian_version -[2023-07-30 13:36:40] DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:40] DEBUG - $VAR1 = { - 'prepend_command' => 'sudo -p \'\' -S' - }; - -[2023-07-30 13:36:40] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/debian_version ' -[2023-07-30 13:36:40] DEBUG - Rex::Interface::Exec::OpenSSH/executing: 
sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/debian_version ' -[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (private_key): returning -[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (public_key): returning -[2023-07-30 13:36:40] DEBUG - Sudo: Executing: test -e /etc/SuSE-release -[2023-07-30 13:36:40] DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:40] DEBUG - $VAR1 = { - 'prepend_command' => 'sudo -p \'\' -S' - }; - -[2023-07-30 13:36:40] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/SuSE-release ' -[2023-07-30 13:36:40] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/SuSE-release ' -[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (private_key): returning -[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (public_key): returning -[2023-07-30 13:36:40] DEBUG - Sudo: Executing: test -d /etc/SuSE-release -[2023-07-30 13:36:40] DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:40] DEBUG - $VAR1 = { - 'prepend_command' => 'sudo -p \'\' -S' - }; - -[2023-07-30 13:36:40] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/SuSE-release ' -[2023-07-30 13:36:40] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/SuSE-release ' -[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (private_key): returning -[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (public_key): returning -[2023-07-30 13:36:40] DEBUG - Sudo: Executing: test -e /etc/SUSE-brand -[2023-07-30 13:36:40] DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:40] DEBUG - $VAR1 = { - 'prepend_command' => 'sudo -p \'\' -S' - }; - -[2023-07-30 13:36:40] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/SUSE-brand ' -[2023-07-30 13:36:40] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e 
/etc/SUSE-brand ' -[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (private_key): returning -[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (public_key): returning -[2023-07-30 13:36:40] DEBUG - Sudo: Executing: test -d /etc/SUSE-brand -[2023-07-30 13:36:40] DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:40] DEBUG - $VAR1 = { - 'prepend_command' => 'sudo -p \'\' -S' - }; - -[2023-07-30 13:36:40] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/SUSE-brand ' -[2023-07-30 13:36:40] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/SUSE-brand ' -[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (private_key): returning -[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (public_key): returning -[2023-07-30 13:36:40] DEBUG - Sudo: Executing: test -e /etc/mageia-release -[2023-07-30 13:36:40] DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:40] DEBUG - $VAR1 = { - 'prepend_command' => 'sudo -p \'\' -S' - }; - -[2023-07-30 13:36:40] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/mageia-release ' -[2023-07-30 13:36:40] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/mageia-release ' -[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (private_key): returning -[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (public_key): returning -[2023-07-30 13:36:41] DEBUG - Sudo: Executing: test -d /etc/mageia-release -[2023-07-30 13:36:41] DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:41] DEBUG - $VAR1 = { - 'prepend_command' => 'sudo -p \'\' -S' - }; - -[2023-07-30 13:36:41] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/mageia-release ' -[2023-07-30 13:36:41] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/mageia-release ' -[2023-07-30 13:36:41] DEBUG - 
Rex::Group::Entry::Server (private_key): returning -[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (public_key): returning -[2023-07-30 13:36:41] DEBUG - Sudo: Executing: test -e /etc/fedora-release -[2023-07-30 13:36:41] DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:41] DEBUG - $VAR1 = { - 'prepend_command' => 'sudo -p \'\' -S' - }; - -[2023-07-30 13:36:41] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/fedora-release ' -[2023-07-30 13:36:41] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/fedora-release ' -[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (private_key): returning -[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (public_key): returning -[2023-07-30 13:36:41] DEBUG - Sudo: Executing: test -d /etc/fedora-release -[2023-07-30 13:36:41] DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:41] DEBUG - $VAR1 = { - 'prepend_command' => 'sudo -p \'\' -S' - }; - -[2023-07-30 13:36:41] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/fedora-release ' -[2023-07-30 13:36:41] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/fedora-release ' -[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (private_key): returning -[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (public_key): returning -[2023-07-30 13:36:41] DEBUG - Sudo: Executing: test -e /etc/gentoo-release -[2023-07-30 13:36:41] DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:41] DEBUG - $VAR1 = { - 'prepend_command' => 'sudo -p \'\' -S' - }; - -[2023-07-30 13:36:41] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/gentoo-release ' -[2023-07-30 13:36:41] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/gentoo-release ' -[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (private_key): 
returning -[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (public_key): returning -[2023-07-30 13:36:41] DEBUG - Sudo: Executing: test -d /etc/gentoo-release -[2023-07-30 13:36:41] DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:41] DEBUG - $VAR1 = { - 'prepend_command' => 'sudo -p \'\' -S' - }; - -[2023-07-30 13:36:41] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/gentoo-release ' -[2023-07-30 13:36:41] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/gentoo-release ' -[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (private_key): returning -[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (public_key): returning -[2023-07-30 13:36:41] DEBUG - Sudo: Executing: test -e /etc/altlinux-release -[2023-07-30 13:36:41] DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:41] DEBUG - $VAR1 = { - 'prepend_command' => 'sudo -p \'\' -S' - }; - -[2023-07-30 13:36:41] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/altlinux-release ' -[2023-07-30 13:36:41] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/altlinux-release ' -[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (private_key): returning -[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (public_key): returning -[2023-07-30 13:36:41] DEBUG - Sudo: Executing: test -d /etc/altlinux-release -[2023-07-30 13:36:41] DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:41] DEBUG - $VAR1 = { - 'prepend_command' => 'sudo -p \'\' -S' - }; - -[2023-07-30 13:36:41] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/altlinux-release ' -[2023-07-30 13:36:41] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/altlinux-release ' -[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (private_key): returning -[2023-07-30 
13:36:41] DEBUG - Rex::Group::Entry::Server (public_key): returning -[2023-07-30 13:36:41] DEBUG - Sudo: Executing: test -e /etc/redhat-release -[2023-07-30 13:36:41] DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:41] DEBUG - $VAR1 = { - 'prepend_command' => 'sudo -p \'\' -S' - }; - -[2023-07-30 13:36:41] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/redhat-release ' -[2023-07-30 13:36:41] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/redhat-release ' -[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (private_key): returning -[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (public_key): returning -[2023-07-30 13:36:41] DEBUG - Sudo: Executing: test -d /etc/redhat-release -[2023-07-30 13:36:41] DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:41] DEBUG - $VAR1 = { - 'prepend_command' => 'sudo -p \'\' -S' - }; - -[2023-07-30 13:36:41] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/redhat-release ' -[2023-07-30 13:36:41] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/redhat-release ' -[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (private_key): returning -[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (public_key): returning -[2023-07-30 13:36:41] DEBUG - Sudo: Executing: test -e /etc/openwrt_release -[2023-07-30 13:36:41] DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:41] DEBUG - $VAR1 = { - 'prepend_command' => 'sudo -p \'\' -S' - }; - -[2023-07-30 13:36:41] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/openwrt_release ' -[2023-07-30 13:36:41] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/openwrt_release ' -[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (private_key): returning -[2023-07-30 13:36:42] DEBUG - 
Rex::Group::Entry::Server (public_key): returning -[2023-07-30 13:36:42] DEBUG - Sudo: Executing: test -d /etc/openwrt_release -[2023-07-30 13:36:42] DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:42] DEBUG - $VAR1 = { - 'prepend_command' => 'sudo -p \'\' -S' - }; - -[2023-07-30 13:36:42] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/openwrt_release ' -[2023-07-30 13:36:42] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/openwrt_release ' -[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (private_key): returning -[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (public_key): returning -[2023-07-30 13:36:42] DEBUG - Sudo: Executing: test -e /etc/arch-release -[2023-07-30 13:36:42] DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:42] DEBUG - $VAR1 = { - 'prepend_command' => 'sudo -p \'\' -S' - }; - -[2023-07-30 13:36:42] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/arch-release ' -[2023-07-30 13:36:42] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/arch-release ' -[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (private_key): returning -[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (public_key): returning -[2023-07-30 13:36:42] DEBUG - Sudo: Executing: test -d /etc/arch-release -[2023-07-30 13:36:42] DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:42] DEBUG - $VAR1 = { - 'prepend_command' => 'sudo -p \'\' -S' - }; - -[2023-07-30 13:36:42] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/arch-release ' -[2023-07-30 13:36:42] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/arch-release ' -[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (private_key): returning -[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (public_key): returning 
-[2023-07-30 13:36:42] DEBUG - Sudo: Executing: test -e /etc/manjaro-release -[2023-07-30 13:36:42] DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:42] DEBUG - $VAR1 = { - 'prepend_command' => 'sudo -p \'\' -S' - }; - -[2023-07-30 13:36:42] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/manjaro-release ' -[2023-07-30 13:36:42] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/manjaro-release ' -[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (private_key): returning -[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (public_key): returning -[2023-07-30 13:36:42] DEBUG - Sudo: Executing: test -d /etc/manjaro-release -[2023-07-30 13:36:42] DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:42] DEBUG - $VAR1 = { - 'prepend_command' => 'sudo -p \'\' -S' - }; - -[2023-07-30 13:36:42] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/manjaro-release ' -[2023-07-30 13:36:42] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/manjaro-release ' -[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (private_key): returning -[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (public_key): returning -[2023-07-30 13:36:42] DEBUG - Sudo: Executing: uname -s -[2023-07-30 13:36:42] DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:42] DEBUG - $VAR1 = { - 'valid_retval' => [ - 0 - ], - 'fail_ok' => 0, - 'prepend_command' => 'sudo -p \'\' -S' - }; - -[2023-07-30 13:36:42] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/pkg/bin:/usr/pkg/sbin; export PATH; uname -s ' -[2023-07-30 13:36:42] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/pkg/bin:/usr/pkg/sbin; export PATH; 
uname -s ' -[2023-07-30 13:36:42] DEBUG - OpenBSD - -[2023-07-30 13:36:42] DEBUG - Detecting shell... -[2023-07-30 13:36:42] DEBUG - Found shell in cache: ksh -[2023-07-30 13:36:42] DEBUG - Detecting shell... -[2023-07-30 13:36:42] DEBUG - Found shell in cache: ksh -[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (private_key): returning -[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (public_key): returning -[2023-07-30 13:36:42] DEBUG - Sudo: Executing: perl -e 'print scalar getpwuid($<)' -[2023-07-30 13:36:42] DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:42] DEBUG - $VAR1 = { - 'fail_ok' => 0, - 'valid_retval' => [ - 0 - ], - 'prepend_command' => 'sudo -p \'\' -S' - }; - -[2023-07-30 13:36:42] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/pkg/bin:/usr/pkg/sbin; export PATH; perl -e '\''print scalar getpwuid($<)'\'' ' -[2023-07-30 13:36:42] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/pkg/bin:/usr/pkg/sbin; export PATH; perl -e '\''print scalar getpwuid($<)'\'' ' -[2023-07-30 13:36:42] DEBUG - root -[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (private_key): returning -[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (public_key): returning -[2023-07-30 13:36:42] DEBUG - Sudo: Executing: ( crontab -l -u _gogios >/tmp/umkmfvxctxjg.tmp ) >& /dev/null ; cat /tmp/umkmfvxctxjg.tmp ; rm /tmp/umkmfvxctxjg.tmp -[2023-07-30 13:36:42] DEBUG - Shell/Bash: Got options: -[2023-07-30 13:36:42] DEBUG - $VAR1 = { - 'prepend_command' => 'sudo -p \'\' -S', - 'valid_retval' => [ - 0 - ], - 'fail_ok' => 0 - }; - -[2023-07-30 13:36:42] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/pkg/bin:/usr/pkg/sbin; export PATH; ( crontab -l -u _gogios 
>/tmp/umkmfvxctxjg.tmp ) >& /dev/null ; cat /tmp/umkmfvxctxjg.tmp ; rm /tmp/umkmfvxctxjg.tmp ' -[2023-07-30 13:36:42] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/pkg/bin:/usr/pkg/sbin; export PATH; ( crontab -l -u _gogios >/tmp/umkmfvxctxjg.tmp ) >& /dev/null ; cat /tmp/umkmfvxctxjg.tmp ; rm /tmp/umkmfvxctxjg.tmp ' -[2023-07-30 13:36:42] DEBUG - ========= ERR ============ -[2023-07-30 13:36:42] DEBUG - sh: >&/dev/null : illegal file descriptor name -cat: /tmp/umkmfvxctxjg.tmp: No such file or directory -rm: /tmp/umkmfvxctxjg.tmp: No such file or directory - -[2023-07-30 13:36:42] DEBUG - ========= ERR ============ -[2023-07-30 13:36:42] DEBUG - Error executing `( crontab -l -u _gogios >/tmp/umkmfvxctxjg.tmp ) >& /dev/null ; cat /tmp/umkmfvxctxjg.tmp ; rm /tmp/umkmfvxctxjg.tmp`: -[2023-07-30 13:36:42] DEBUG - STDOUT: -[2023-07-30 13:36:42] DEBUG - -[2023-07-30 13:36:42] DEBUG - STDERR: -[2023-07-30 13:36:42] DEBUG - sh: >&/dev/null : illegal file descriptor name -cat: /tmp/umkmfvxctxjg.tmp: No such file or directory -rm: /tmp/umkmfvxctxjg.tmp: No such file or directory -[2023-07-30 13:36:42] ERROR - Error executing task: -[2023-07-30 13:36:42] ERROR - Error during `i_run` at /usr/share/perl5/vendor_perl/Rex/Helper/Run.pm line 120, <ARGV> line 8. - Rex::Helper::Run::i_run("( crontab -l -u _gogios >/tmp/umkmfvxctxjg.tmp ) >& /dev/null"...) 
called at /usr/share/perl5/vendor_perl/Rex/Cron/FreeBSD.pm line 38 - Rex::Cron::FreeBSD::read_user_cron(Rex::Cron::FreeBSD=HASH(0x5603c05187c0), "_gogios") called at /usr/share/perl5/vendor_perl/Rex/Commands/Cron.pm line 224 - Rex::Commands::Cron::cron("add", "_gogios", HASH(0x5603bfff6048)) called at /loader/0x5603bedbd710/__Rexfile__.pm line 15 - Rex::CLI::__ANON__(HASH(0x5603bfa6efe0), ARRAY(0x5603bfa6f130)) called at /usr/share/perl5/vendor_perl/Rex/Interface/Executor/Default.pm line 59 - eval {...} called at /usr/share/perl5/vendor_perl/Rex/Interface/Executor/Default.pm line 41 - Rex::Interface::Executor::Default::exec(Rex::Interface::Executor::Default=HASH(0x5603bfa81380), HASH(0x5603bfa6efe0), ARRAY(0x5603bfa6f130)) called at /usr/share/perl5/vendor_perl/Rex/Task.pm line 880 - eval {...} called at /usr/share/perl5/vendor_perl/Rex/Task.pm line 884 - Rex::Task::run(Rex::Task=HASH(0x5603bfa81080), Rex::Group::Entry::Server=HASH(0x5603bfa6f460), "in_transaction", 0, "params", undef, "args", undef) called at /usr/share/perl5/vendor_perl/Rex/TaskList/Base.pm line 340 - eval {...} called at /usr/share/perl5/vendor_perl/Rex/TaskList/Base.pm line 334 - Rex::TaskList::Base::__ANON__(Rex::Fork::Task=HASH(0x5603bfa6f430)) called at /usr/share/perl5/vendor_perl/Rex/Fork/Task.pm line 32 - Rex::Fork::Task::start(Rex::Fork::Task=HASH(0x5603bfa6f430)) called at /usr/share/perl5/vendor_perl/Rex/Fork/Manager.pm line 35 - Rex::Fork::Manager::add(Rex::Fork::Manager=HASH(0x5603befb5748), CODE(0x5603be7912d0)) called at /usr/share/perl5/vendor_perl/Rex/TaskList/Base.pm line 316 - Rex::TaskList::Base::run(Rex::TaskList::Base=HASH(0x5603bfa80e10), Rex::Task=HASH(0x5603bfa813e0)) called at /usr/share/perl5/vendor_perl/Rex/TaskList.pm line 61 - Rex::TaskList::run("Rex::TaskList", Rex::Task=HASH(0x5603bfa813e0)) called at /usr/share/perl5/vendor_perl/Rex/RunList.pm line 67 - Rex::RunList::run_tasks(Rex::RunList=HASH(0x5603bf0cad90)) called at /usr/share/perl5/vendor_perl/Rex/CLI.pm 
line 374 - eval {...} called at /usr/share/perl5/vendor_perl/Rex/CLI.pm line 374 - Rex::CLI::__run__(Rex::CLI=HASH(0x5603be6594e8)) called at /usr/bin/rex line 22 - -[2023-07-30 13:36:42] DEBUG - Destroying all cached os information -[2023-07-30 13:36:43] DEBUG - Need to reinitialize connections. -[2023-07-30 13:36:43] DEBUG - Returning existing distribution class of type: Rex::TaskList::Base -[2023-07-30 13:36:43] ERROR - 1 out of 1 task(s) failed: -[2023-07-30 13:36:43] ERROR - openbsd_cron_test failed on blowfish.buetow.org:2 -[2023-07-30 13:36:43] ERROR - Error during `i_run` at /usr/share/perl5/vendor_perl/Rex/Helper/Run.pm line 120, <ARGV> line 8. -[2023-07-30 13:36:43] ERROR - Rex::Helper::Run::i_run("( crontab -l -u _gogios >/tmp/umkmfvxctxjg.tmp ) >& /dev/null"...) called at /usr/share/perl5/vendor_perl/Rex/Cron/FreeBSD.pm line 38 -[2023-07-30 13:36:43] ERROR - Rex::Cron::FreeBSD::read_user_cron(Rex::Cron::FreeBSD=HASH(0x5603c05187c0), "_gogios") called at /usr/share/perl5/vendor_perl/Rex/Commands/Cron.pm line 224 -[2023-07-30 13:36:43] ERROR - Rex::Commands::Cron::cron("add", "_gogios", HASH(0x5603bfff6048)) called at /loader/0x5603bedbd710/__Rexfile__.pm line 15 -[2023-07-30 13:36:43] ERROR - Rex::CLI::__ANON__(HASH(0x5603bfa6efe0), ARRAY(0x5603bfa6f130)) called at /usr/share/perl5/vendor_perl/Rex/Interface/Executor/Default.pm line 59 -[2023-07-30 13:36:43] ERROR - eval {...} called at /usr/share/perl5/vendor_perl/Rex/Interface/Executor/Default.pm line 41 -[2023-07-30 13:36:43] ERROR - Rex::Interface::Executor::Default::exec(Rex::Interface::Executor::Default=HASH(0x5603bfa81380), HASH(0x5603bfa6efe0), ARRAY(0x5603bfa6f130)) called at /usr/share/perl5/vendor_perl/Rex/Task.pm line 880 -[2023-07-30 13:36:43] ERROR - eval {...} called at /usr/share/perl5/vendor_perl/Rex/Task.pm line 884 -[2023-07-30 13:36:43] ERROR - Rex::Task::run(Rex::Task=HASH(0x5603bfa81080), Rex::Group::Entry::Server=HASH(0x5603bfa6f460), "in_transaction", 0, "params", undef, "args", 
undef) called at /usr/share/perl5/vendor_perl/Rex/TaskList/Base.pm line 340
-[2023-07-30 13:36:43] ERROR - eval {...} called at /usr/share/perl5/vendor_perl/Rex/TaskList/Base.pm line 334
-[2023-07-30 13:36:43] ERROR - Rex::TaskList::Base::__ANON__(Rex::Fork::Task=HASH(0x5603bfa6f430)) called at /usr/share/perl5/vendor_perl/Rex/Fork/Task.pm line 32
-[2023-07-30 13:36:43] ERROR - Rex::Fork::Task::start(Rex::Fork::Task=HASH(0x5603bfa6f430)) called at /usr/share/perl5/vendor_perl/Rex/Fork/Manager.pm line 35
-[2023-07-30 13:36:43] ERROR - Rex::Fork::Manager::add(Rex::Fork::Manager=HASH(0x5603befb5748), CODE(0x5603be7912d0)) called at /usr/share/perl5/vendor_perl/Rex/TaskList/Base.pm line 316
-[2023-07-30 13:36:43] ERROR - Rex::TaskList::Base::run(Rex::TaskList::Base=HASH(0x5603bfa80e10), Rex::Task=HASH(0x5603bfa813e0)) called at /usr/share/perl5/vendor_perl/Rex/TaskList.pm line 61
-[2023-07-30 13:36:43] ERROR - Rex::TaskList::run("Rex::TaskList", Rex::Task=HASH(0x5603bfa813e0)) called at /usr/share/perl5/vendor_perl/Rex/RunList.pm line 67
-[2023-07-30 13:36:43] ERROR - Rex::RunList::run_tasks(Rex::RunList=HASH(0x5603bf0cad90)) called at /usr/share/perl5/vendor_perl/Rex/CLI.pm line 374
-[2023-07-30 13:36:43] ERROR - eval {...} called at /usr/share/perl5/vendor_perl/Rex/CLI.pm line 374
-[2023-07-30 13:36:43] ERROR - Rex::CLI::__run__(Rex::CLI=HASH(0x5603be6594e8)) called at /usr/bin/rex line 22
-[2023-07-30 13:36:43] DEBUG - Removing lockfile
-[2023-07-30 13:36:43] DEBUG - Returning existing distribution class of type: Rex::TaskList::Base
diff --git a/gemfeed/examples/conf/playground/openbsd_cron_test.txt b/gemfeed/examples/conf/playground/openbsd_cron_test.txt
deleted file mode 100644
index fdeca282..00000000
--- a/gemfeed/examples/conf/playground/openbsd_cron_test.txt
+++ /dev/null
@@ -1,42 +0,0 @@
-[paul@earth]~/git/rexfiles/testing% rex -m openbsd_cron_test &> openbsd_cron_test.txt
-[2023-07-30 13:36:19] INFO - Running task openbsd_cron_test on blowfish.buetow.org:2
-[2023-07-30 13:36:27] ERROR - Error executing task: -[2023-07-30 13:36:27] ERROR - Error during `i_run` at /usr/share/perl5/vendor_perl/Rex/Helper/Run.pm line 120, <ARGV> line 8. - Rex::Helper::Run::i_run("( crontab -l -u _gogios >/tmp/johvumpjmtuo.tmp ) >& /dev/null"...) called at /usr/share/perl5/vendor_perl/Rex/Cron/FreeBSD.pm line 38 - Rex::Cron::FreeBSD::read_user_cron(Rex::Cron::FreeBSD=HASH(0x55f31eb606b0), "_gogios") called at /usr/share/perl5/vendor_perl/Rex/Commands/Cron.pm line 224 - Rex::Commands::Cron::cron("add", "_gogios", HASH(0x55f31e7a4198)) called at /loader/0x55f31d3e79c8/__Rexfile__.pm line 15 - Rex::CLI::__ANON__(HASH(0x55f31e795d60), ARRAY(0x55f31e7889c0)) called at /usr/share/perl5/vendor_perl/Rex/Interface/Executor/Default.pm line 59 - eval {...} called at /usr/share/perl5/vendor_perl/Rex/Interface/Executor/Default.pm line 41 - Rex::Interface::Executor::Default::exec(Rex::Interface::Executor::Default=HASH(0x55f31e0731c0), HASH(0x55f31e795d60), ARRAY(0x55f31e7889c0)) called at /usr/share/perl5/vendor_perl/Rex/Task.pm line 880 - eval {...} called at /usr/share/perl5/vendor_perl/Rex/Task.pm line 884 - Rex::Task::run(Rex::Task=HASH(0x55f31e795bf8), Rex::Group::Entry::Server=HASH(0x55f31ccb1010), "in_transaction", 0, "params", undef, "args", undef) called at /usr/share/perl5/vendor_perl/Rex/TaskList/Base.pm line 340 - eval {...} called at /usr/share/perl5/vendor_perl/Rex/TaskList/Base.pm line 334 - Rex::TaskList::Base::__ANON__(Rex::Fork::Task=HASH(0x55f31db4b820)) called at /usr/share/perl5/vendor_perl/Rex/Fork/Task.pm line 32 - Rex::Fork::Task::start(Rex::Fork::Task=HASH(0x55f31db4b820)) called at /usr/share/perl5/vendor_perl/Rex/Fork/Manager.pm line 35 - Rex::Fork::Manager::add(Rex::Fork::Manager=HASH(0x55f31ccbf6c8), CODE(0x55f31ccbf6f8)) called at /usr/share/perl5/vendor_perl/Rex/TaskList/Base.pm line 316 - Rex::TaskList::Base::run(Rex::TaskList::Base=HASH(0x55f31e072ed8), Rex::Task=HASH(0x55f31e72a460)) called at 
/usr/share/perl5/vendor_perl/Rex/TaskList.pm line 61 - Rex::TaskList::run("Rex::TaskList", Rex::Task=HASH(0x55f31e72a460)) called at /usr/share/perl5/vendor_perl/Rex/RunList.pm line 67 - Rex::RunList::run_tasks(Rex::RunList=HASH(0x55f31d6f6308)) called at /usr/share/perl5/vendor_perl/Rex/CLI.pm line 374 - eval {...} called at /usr/share/perl5/vendor_perl/Rex/CLI.pm line 374 - Rex::CLI::__run__(Rex::CLI=HASH(0x55f31cc844e8)) called at /usr/bin/rex line 22 - -[2023-07-30 13:36:27] ERROR - 1 out of 1 task(s) failed: -[2023-07-30 13:36:27] ERROR - openbsd_cron_test failed on blowfish.buetow.org:2 -[2023-07-30 13:36:27] ERROR - Error during `i_run` at /usr/share/perl5/vendor_perl/Rex/Helper/Run.pm line 120, <ARGV> line 8. -[2023-07-30 13:36:27] ERROR - Rex::Helper::Run::i_run("( crontab -l -u _gogios >/tmp/johvumpjmtuo.tmp ) >& /dev/null"...) called at /usr/share/perl5/vendor_perl/Rex/Cron/FreeBSD.pm line 38 -[2023-07-30 13:36:27] ERROR - Rex::Cron::FreeBSD::read_user_cron(Rex::Cron::FreeBSD=HASH(0x55f31eb606b0), "_gogios") called at /usr/share/perl5/vendor_perl/Rex/Commands/Cron.pm line 224 -[2023-07-30 13:36:27] ERROR - Rex::Commands::Cron::cron("add", "_gogios", HASH(0x55f31e7a4198)) called at /loader/0x55f31d3e79c8/__Rexfile__.pm line 15 -[2023-07-30 13:36:27] ERROR - Rex::CLI::__ANON__(HASH(0x55f31e795d60), ARRAY(0x55f31e7889c0)) called at /usr/share/perl5/vendor_perl/Rex/Interface/Executor/Default.pm line 59 -[2023-07-30 13:36:27] ERROR - eval {...} called at /usr/share/perl5/vendor_perl/Rex/Interface/Executor/Default.pm line 41 -[2023-07-30 13:36:27] ERROR - Rex::Interface::Executor::Default::exec(Rex::Interface::Executor::Default=HASH(0x55f31e0731c0), HASH(0x55f31e795d60), ARRAY(0x55f31e7889c0)) called at /usr/share/perl5/vendor_perl/Rex/Task.pm line 880 -[2023-07-30 13:36:27] ERROR - eval {...} called at /usr/share/perl5/vendor_perl/Rex/Task.pm line 884 -[2023-07-30 13:36:27] ERROR - Rex::Task::run(Rex::Task=HASH(0x55f31e795bf8), 
Rex::Group::Entry::Server=HASH(0x55f31ccb1010), "in_transaction", 0, "params", undef, "args", undef) called at /usr/share/perl5/vendor_perl/Rex/TaskList/Base.pm line 340 -[2023-07-30 13:36:27] ERROR - eval {...} called at /usr/share/perl5/vendor_perl/Rex/TaskList/Base.pm line 334 -[2023-07-30 13:36:27] ERROR - Rex::TaskList::Base::__ANON__(Rex::Fork::Task=HASH(0x55f31db4b820)) called at /usr/share/perl5/vendor_perl/Rex/Fork/Task.pm line 32 -[2023-07-30 13:36:27] ERROR - Rex::Fork::Task::start(Rex::Fork::Task=HASH(0x55f31db4b820)) called at /usr/share/perl5/vendor_perl/Rex/Fork/Manager.pm line 35 -[2023-07-30 13:36:27] ERROR - Rex::Fork::Manager::add(Rex::Fork::Manager=HASH(0x55f31ccbf6c8), CODE(0x55f31ccbf6f8)) called at /usr/share/perl5/vendor_perl/Rex/TaskList/Base.pm line 316 -[2023-07-30 13:36:27] ERROR - Rex::TaskList::Base::run(Rex::TaskList::Base=HASH(0x55f31e072ed8), Rex::Task=HASH(0x55f31e72a460)) called at /usr/share/perl5/vendor_perl/Rex/TaskList.pm line 61 -[2023-07-30 13:36:27] ERROR - Rex::TaskList::run("Rex::TaskList", Rex::Task=HASH(0x55f31e72a460)) called at /usr/share/perl5/vendor_perl/Rex/RunList.pm line 67 -[2023-07-30 13:36:27] ERROR - Rex::RunList::run_tasks(Rex::RunList=HASH(0x55f31d6f6308)) called at /usr/share/perl5/vendor_perl/Rex/CLI.pm line 374 -[2023-07-30 13:36:27] ERROR - eval {...} called at /usr/share/perl5/vendor_perl/Rex/CLI.pm line 374 -[2023-07-30 13:36:27] ERROR - Rex::CLI::__run__(Rex::CLI=HASH(0x55f31cc844e8)) called at /usr/bin/rex line 22 @@ -13,7 +13,7 @@ </p> <h1 style='display: inline' id='hello'>Hello!</h1><br /> <br /> -<span class='quote'>This site was generated at 2025-10-02T11:27:20+03:00 by <span class='inlinecode'>Gemtexter</span></span><br /> +<span class='quote'>This site was generated at 2025-10-02T11:30:14+03:00 by <span class='inlinecode'>Gemtexter</span></span><br /> <br /> <span>Welcome to the foo.zone!</span><br /> <br /> diff --git a/uptime-stats.html b/uptime-stats.html index 0bfa741b..24dd6bfd 100644 
--- a/uptime-stats.html +++ b/uptime-stats.html @@ -13,7 +13,7 @@ </p> <h1 style='display: inline' id='my-machine-uptime-stats'>My machine uptime stats</h1><br /> <br /> -<span class='quote'>This site was last updated at 2025-10-02T11:27:20+03:00</span><br /> +<span class='quote'>This site was last updated at 2025-10-02T11:30:14+03:00</span><br /> <br /> <span>The following stats were collected via <span class='inlinecode'>uptimed</span> on all of my personal computers over many years and the output was generated by <span class='inlinecode'>guprecords</span>, the global uptime records stats analyser of mine.</span><br /> <br /> |
