201 files changed, 12453 insertions, 1204 deletions
diff --git a/about/resources.html b/about/resources.html index 00b03532..b31929e3 100644 --- a/about/resources.html +++ b/about/resources.html @@ -50,53 +50,53 @@ <span>In random order:</span><br /> <br /> <ul> -<li>The KCNA (Kubernetes and Cloud Native Associate) Book; Nigel Poulton</li> +<li>21st Century C: C Tips from the New School; Ben Klemens; O'Reilly</li> +<li>The Pragmatic Programmer; David Thomas; Addison-Wesley</li> +<li>Ultimate Go Notebook; Bill Kennedy</li> +<li>97 things every SRE should know; Emil Stolarsky, Jaime Woo; O'Reilly</li> <li>C++ Programming Language; Bjarne Stroustrup;</li> -<li>DNS and BIND; Cricket Liu; O'Reilly</li> -<li>Site Reliability Engineering; How Google runs production systems; O'Reilly</li> -<li>The Kubernetes Book; Nigel Poulton; Unabridged Audiobook</li> -<li>Higher Order Perl; Mark Dominus; Morgan Kaufmann</li> +<li>Modern Perl; Chromatic ; Onyx Neon Press</li> <li>Funktionale Programmierung; Peter Pepper; Springer</li> -<li>Distributed Systems: Principles and Paradigms; Andrew S. Tanenbaum; Pearson</li> -<li>Developing Games in Java; David Brackeen and others...; New Riders</li> -<li>Raku Fundamentals; Moritz Lenz; Apress</li> <li>DevOps And Site Reliability Engineering Handbook; Stephen Fleming; Audible</li> -<li>Terraform Cookbook; Mikael Krief; Packt Publishing</li> -<li>Effective Java; Joshua Bloch; Addison-Wesley Professional</li> -<li>Perl New Features; Joshua McAdams, brian d foy; Perl School</li> +<li>Hands-on Infrastructure Monitoring with Prometheus; Joel Bastos, Pedro Araujo; Packt </li> <li>The Docker Book; James Turnbull; Kindle</li> -<li>The Pragmatic Programmer; David Thomas; Addison-Wesley</li> -<li>Chaos Engineering - System Resiliency in Practice; Casey Rosenthal and Nora Jones; eBook</li> -<li>Raku Recipes; J.J. Merelo; Apress</li> -<li>100 Go Mistakes and How to Avoid Them; Teiva Harsanyi; Manning Publications</li> -<li>The Go Programming Language; Alan A. A. 
Donovan; Addison-Wesley Professional</li> -<li>97 things every SRE should know; Emil Stolarsky, Jaime Woo; O'Reilly</li> +<li>Distributed Systems: Principles and Paradigms; Andrew S. Tanenbaum; Pearson</li> +<li>The KCNA (Kubernetes and Cloud Native Associate) Book; Nigel Poulton</li> <li>The DevOps Handbook; Gene Kim, Jez Humble, Patrick Debois, John Willis; Audible</li> -<li>Systems Performance Tuning; Gian-Paolo D. Musumeci and others...; O'Reilly</li> -<li>Go Brain Teasers - Exercise Your Mind; Miki Tebeka; The Pragmatic Programmers</li> -<li>The Practise of System and Network Administration; Thomas A. Limoncelli, Christina J. Hogan, Strata R. Chalup; Addison-Wesley Professional Pro Git; Scott Chacon, Ben Straub; Apress</li> <li>Amazon Web Services in Action; Michael Wittig and Andreas Wittig; Manning Publications</li> -<li>Data Science at the Command Line; Jeroen Janssens; O'Reilly</li> -<li>21st Century C: C Tips from the New School; Ben Klemens; O'Reilly</li> -<li>Kubernetes Cookbook; Sameer Naik, Sébastien Goasguen, Jonathan Michaux; O'Reilly</li> -<li>Programming Perl aka "The Camel Book"; Tom Christiansen, brian d foy, Larry Wall & Jon Orwant; O'Reilly</li> <li>Effective awk programming; Arnold Robbins; O'Reilly</li> -<li>Concurrency in Go; Katherine Cox-Buday; O'Reilly</li> -<li>Object-Oriented Programming with ANSI-C; Axel-Tobias Schreiner</li> -<li>Modern Perl; Chromatic ; Onyx Neon Press</li> <li>Java ist auch eine Insel; Christian Ullenboom; </li> -<li>Leanring eBPF; Liz Rice; O'Reilly</li> -<li>Tmux 2: Productive Mouse-free Development; Brain P. Hogan; The Pragmatic Programmers </li> +<li>The Practice of System and Network Administration; Thomas A. Limoncelli, Christina J. Hogan, Strata R. 
Chalup; Addison-Wesley Professional Pro Git; Scott Chacon, Ben Straub; Apress</li> +<li>Programming Perl aka "The Camel Book"; Tom Christiansen, brian d foy, Larry Wall & Jon Orwant; O'Reilly</li> +<li>Site Reliability Engineering; How Google runs production systems; O'Reilly</li> +<li>Higher Order Perl; Mark Dominus; Morgan Kaufmann</li> <li>Polished Ruby Programming; Jeremy Evans; Packt Publishing</li> -<li>Learn You Some Erlang for Great Good; Fred Herbert; No Starch Press</li> +<li>Learning eBPF; Liz Rice; O'Reilly</li> +<li>Clusterbau mit Linux-HA; Michael Schwartzkopff; O'Reilly</li> <li>Think Raku (aka Think Perl 6); Laurent Rosenfeld, Allen B. Downey; O'Reilly</li> -<li>Learn You a Haskell for Great Good!; Miran Lipovaca; No Starch Press</li> +<li>Data Science at the Command Line; Jeroen Janssens; O'Reilly</li> +<li>Go Brain Teasers - Exercise Your Mind; Miki Tebeka; The Pragmatic Programmers</li> +<li>Programming Ruby 3.3 (5th Edition); Noel Rappin, with Dave Thomas; The Pragmatic Bookshelf</li> +<li>Kubernetes Cookbook; Sameer Naik, Sébastien Goasguen, Jonathan Michaux; O'Reilly</li> +<li>Chaos Engineering - System Resiliency in Practice; Casey Rosenthal and Nora Jones; eBook</li> +<li>100 Go Mistakes and How to Avoid Them; Teiva Harsanyi; Manning Publications</li> +<li>Systems Performance Tuning; Gian-Paolo D. Musumeci and others...; O'Reilly</li> +<li>Raku Fundamentals; Moritz Lenz; Apress</li> +<li>Terraform Cookbook; Mikael Krief; Packt Publishing</li> +<li>Tmux 2: Productive Mouse-free Development; Brian P. 
Hogan; The Pragmatic Programmers </li> <li>Systemprogrammierung in Go; Frank Müller; dpunkt</li> -<li>Hands-on Infrastructure Monitoring with Prometheus; Joel Bastos, Pedro Araujo; Packt </li> +<li>The Kubernetes Book; Nigel Poulton; Unabridged Audiobook</li> +<li>DNS and BIND; Cricket Liu; O'Reilly</li> <li>Pro Puppet; James Turnbull, Jeffrey McCune; Apress</li> -<li>Developing Games in Java; David Brackeen and others...; New Riders</li> +<li>Developing Games in Java; David Brackeen and others...; New Riders</li> +<li>Learn You a Haskell for Great Good!; Miran Lipovaca; No Starch Press</li> +<li>The Go Programming Language; Alan A. A. Donovan; Addison-Wesley Professional</li> +<li>Raku Recipes; J.J. Merelo; Apress</li> +<li>Learn You Some Erlang for Great Good; Fred Hebert; No Starch Press</li> +<li>Perl New Features; Joshua McAdams, brian d foy; Perl School</li> +<li>Effective Java; Joshua Bloch; Addison-Wesley Professional</li> +<li>Object-Oriented Programming with ANSI-C; Axel-Tobias Schreiner</li> +<li>Concurrency in Go; Katherine Cox-Buday; O'Reilly</li> </ul><br /> <h2 style='display: inline' id='technical-references'>Technical references</h2><br /> <br /> @@ -104,55 +104,55 @@ <br /> <ul> <li>BPF Performance Tools - Linux System and Application Observability, Brendan Gregg; Addison Wesley</li> +<li>Relayd and Httpd Mastery; Michael W Lucas</li> +<li>Implementing Service Level Objectives; Alex Hidalgo; O'Reilly</li> +<li>Understanding the Linux Kernel; Daniel P. Bovet, Marco Cesati; O'Reilly</li> <li>The Linux Programming Interface; Michael Kerrisk; No Starch Press </li> +<li>Algorithms; Robert Sedgewick, Kevin Wayne; Addison Wesley</li> <li>Go: Design Patterns for Real-World Projects; Mat Ryer; Packt</li> <li>Groovy Kurz & Gut; Joerg Staudemeier; O'Reilly</li> -<li>Understanding the Linux Kernel; Daniel P. 
Bovet, Marco Cesati; O'Reilly</li> -<li>Relayd and Httpd Mastery; Michael W Lucas</li> -<li>Algorithms; Robert Sedgewick, Kevin Wayne; Addison Wesley</li> -<li>Implementing Service Level Objectives; Alex Hidalgo; O'Reilly</li> </ul><br /> <h2 style='display: inline' id='self-development-and-soft-skills-books'>Self-development and soft-skills books</h2><br /> <br /> <span>In random order:</span><br /> <br /> <ul> -<li>Ultralearning; Scott Young; Thorsons</li> -<li>Getting Things Done; David Allen</li> -<li>97 Things Every Engineering Manager Should Know; Camille Fournier; Audiobook</li> -<li>Solve for Happy; Mo Gawdat (RE-READ 1ST TIME)</li> -<li>The Daily Stoic; Ryan Holiday, Stephen Hanselman; Profile Books</li> <li>Stop starting, start finishing; Arne Roock; Lean-Kanban University </li> -<li>Atomic Habits; James Clear; Random House Business</li> +<li>97 Things Every Engineering Manager Should Know; Camille Fournier; Audiobook</li> +<li>Buddha and Einstein walk into a Bar; Guy Joseph Ale, Claire Bloom; Blackstone Publishing</li> <li>Never Split the Difference; Chris Voss, Tahl Raz; Random House Business</li> -<li>The 7 Habits Of Highly Effective People; Stephen R. Covey; Simon & Schuster UK</li> -<li>So Good They Can't Ignore You; Cal Newport; Business Plus</li> -<li>Who Moved My Cheese?; Dr. Spencer Johnson; Vermilion</li> +<li>Staff Engineer: Leadership beyond the management track; Will Larson; Audiobook</li> +<li>101 Essays that change the way you think; Brianna Wiest; Audiobook</li> +<li>Eat That Frog; Brian Tracy</li> <li>Influence without Authority; A. Cohen, D. 
Bradford; Wiley</li> -<li>The Complete Software Developer's Career Guide; John Sonmez; Unabridged Audiobook</li> -<li>Buddah and Einstein walk into a Bar; Guy Joseph Ale, Claire Bloom; Blackstone Publishing</li> -<li>The Obstacle Is The Way; Ryan Holiday; Profile Books Ltd</li> +<li>Coders at Work - Reflections on the craft of programming, Peter Seibel and Mitchell Dorian et al., Audiobook</li> +<li>Psycho-Cybernetics; Maxwell Maltz; Perigee Books</li> +<li>Atomic Habits; James Clear; Random House Business</li> +<li>The Off Switch; Mark Cropley; Virgin Books (RE-READ 1ST TIME)</li> +<li>The Bullet Journal Method; Ryder Carroll; Fourth Estate</li> <li>Eat That Frog!; Brian Tracy; Hodder Paperbacks</li> -<li>101 Essays that change the way you think; Brianna Wiest; Audiobook</li> <li>Slow Productivity; Cal Newport; Penguin Random House</li> -<li>Staff Engineer: Leadership beyond the management track; Will Larson; Audiobook</li> -<li>Coders at Work - Reflections on the craft of programming, Peter Seibel and Mitchell Dorian et al., Audiobook</li> +<li>Solve for Happy; Mo Gawdat (RE-READ 1ST TIME)</li> +<li>The Obstacle Is The Way; Ryan Holiday; Profile Books Ltd</li> +<li>Ultralearning; Scott Young; Thorsons</li> +<li>The 7 Habits Of Highly Effective People; Stephen R. Covey; Simon & Schuster UK</li> +<li>Digital Minimalism; Cal Newport; Portfolio Penguin</li> +<li>Who Moved My Cheese?; Dr. 
Spencer Johnson; Vermilion</li> +<li>Search Inside Yourself - The Unexpected path to Achieving Success, Happiness (and World Peace); Chade-Meng Tan, Daniel Goleman, Jon Kabat-Zinn; HarperOne</li> +<li>Meditation for Mortals, Oliver Burkeman, Audiobook</li> +<li>Getting Things Done; David Allen</li> +<li>The Daily Stoic; Ryan Holiday, Stephen Hanselman; Profile Books</li> +<li>The Good Enough Job; Simone Stolzoff; Ebury Edge</li> <li>The Power of Now; Eckhard Tolle; Yellow Kite</li> -<li>Soft Skills; John Sommez; Manning Publications</li> -<li>The Joy of Missing Out; Christina Crook; New Society Publishers</li> -<li>Eat That Frog; Brian Tracy</li> +<li>The Phoenix Project - A Novel About IT, DevOps, and Helping your Business Win; Gene Kim and Kevin Behr; Trade Select</li> <li>Deep Work; Cal Newport; Piatkus</li> -<li>The Bullet Journal Method; Ryder Carroll; Fourth Estate</li> -<li>The Good Enough Job; Simone Stolzoff; Ebury Edge</li> +<li>So Good They Can't Ignore You; Cal Newport; Business Plus</li> +<li>Soft Skills; John Sonmez; Manning Publications</li> <li>Consciousness: A Very Short Introduction; Susan Blackmore; Oxford Uiversity Press</li> -<li>Search Inside Yourself - The Unexpected path to Achieving Success, Happiness (and World Peace); Chade-Meng Tan, Daniel Goleman, Jon Kabat-Zinn; HarperOne</li> -<li>The Off Switch; Mark Cropley; Virgin Books (RE-READ 1ST TIME)</li> -<li>The Phoenix Project - A Novel About IT, DevOps, and Helping your Business Win; Gene Kim and Kevin Behr; Trade Select</li> -<li>Digital Minimalism; Cal Newport; Portofolio Penguin</li> -<li>Ultralearning; Anna Laurent; Self-published via Amazon</li> <li>Time Management for System Administrators; Thomas A. 
Limoncelli; O'Reilly</li> -<li>Psycho-Cybernetics; Maxwell Maltz; Perigee Books</li> -<li>Meditation for Mortals, Oliver Burkeman, Audiobook</li> +<li>The Complete Software Developer's Career Guide; John Sonmez; Unabridged Audiobook</li> +<li>Ultralearning; Anna Laurent; Self-published via Amazon</li> </ul><br /> <a class='textlink' href='../notes/index.html'>Here are notes of mine for some of the books</a><br /> <br /> @@ -162,21 +162,21 @@ <br /> <ul> <li>Cloud Operations on AWS - Learn how to configure, deploy, maintain, and troubleshoot your AWS environments; 3-day online live training with labs; Amazon</li> -<li>F5 Loadbalancers Training; 2-day on-site training; F5, Inc. </li> -<li>Functional programming lecture; Remote University of Hagen</li> -<li>The Ultimate Kubernetes Bootcamp; School of Devops; O'Reilly Online</li> -<li>Developing IaC with Terraform (with Live Lessons); O'Reilly Online</li> +<li>Algorithms Video Lectures; Robert Sedgewick; O'Reilly Online</li> <li>MySQL Deep Dive Workshop; 2-day on-site training</li> -<li>The Well-Grounded Rubyist Video Edition; David. A. Black; O'Reilly Online</li> -<li>Structure and Interpretation of Computer Programs; Harold Abelson and more...; </li> <li>Red Hat Certified System Administrator; Course + certification (Although I had the option, I decided not to take the next course as it is more effective to self learn what I need)</li> +<li>The Well-Grounded Rubyist Video Edition; David. A. Black; O'Reilly Online</li> +<li>F5 Loadbalancers Training; 2-day on-site training; F5, Inc. 
</li> +<li>Developing IaC with Terraform (with Live Lessons); O'Reilly Online</li> +<li>Scripting Vim; Damian Conway; O'Reilly Online</li> <li>AWS Immersion Day; Amazon; 1-day interactive online training </li> -<li>Ultimate Go Programming; Bill Kennedy; O'Reilly Online</li> +<li>Structure and Interpretation of Computer Programs; Harold Abelson and more...; </li> <li>Apache Tomcat Best Practises; 3-day on-site training</li> -<li>Protocol buffers; O'Reilly Online</li> +<li>The Ultimate Kubernetes Bootcamp; School of Devops; O'Reilly Online</li> <li>Linux Security and Isolation APIs Training; Michael Kerrisk; 3-day on-site training</li> -<li>Algorithms Video Lectures; Robert Sedgewick; O'Reilly Online</li> -<li>Scripting Vim; Damian Conway; O'Reilly Online</li> +<li>Protocol buffers; O'Reilly Online</li> +<li>Ultimate Go Programming; Bill Kennedy; O'Reilly Online</li> +<li>Functional programming lecture; Remote University of Hagen</li> </ul><br /> <h2 style='display: inline' id='technical-guides'>Technical guides</h2><br /> <br /> @@ -194,20 +194,20 @@ <span>In random order:</span><br /> <br /> <ul> +<li>Wednesday Wisdom</li> +<li>Fork Around And Find Out</li> <li>The ProdCast (Google SRE Podcast)</li> -<li>Backend Banter</li> <li>The Changelog Podcast(s)</li> +<li>Modern Mentor</li> +<li>Backend Banter</li> +<li>Practical AI</li> <li>Maintainable</li> <li>Hidden Brain</li> -<li>Deep Questions with Cal Newport</li> -<li>Wednesday Wisdom</li> -<li>Modern Mentor</li> <li>BSD Now [BSD]</li> -<li>Fork Around And Find Out</li> <li>Fallthrough [Golang]</li> <li>Dev Interrupted</li> +<li>Deep Questions with Cal Newport</li> <li>The Pragmatic Engineer Podcast</li> -<li>Pratical AI</li> <li>Cup o' Go [Golang]</li> </ul><br /> <h3 style='display: inline' id='podcasts-i-liked'>Podcasts I liked</h3><br /> @@ -217,28 +217,28 @@ <ul> <li>Go Time (predecessor of fallthrough)</li> <li>CRE: Chaosradio Express [german]</li> -<li>Java Pub House</li> <li>Ship It (predecessor of Fork 
Around And Find Out)</li> -<li>Modern Mentor</li> +<li>Java Pub House</li> <li>FLOSS weekly</li> +<li>Modern Mentor</li> </ul><br /> <h2 style='display: inline' id='newsletters-i-like'>Newsletters I like</h2><br /> <br /> <span>This is a mix of tech and non-tech newsletters I am subscribed to. In random order:</span><br /> <br /> <ul> -<li>Andreas Brandhorst Newsletter (Sci-Fi author)</li> -<li>The Imperfectionist</li> -<li>Register Spill</li> -<li>Applied Go Weekly Newsletter</li> -<li>VK Newsletter</li> -<li>Golang Weekly</li> <li>Monospace Mentor</li> -<li>The Valuable Dev</li> -<li>Changelog News</li> -<li>The Pragmatic Engineer</li> +<li>Golang Weekly</li> +<li>The Imperfectionist</li> +<li>Andreas Brandhorst Newsletter (Sci-Fi author)</li> <li>Ruby Weekly</li> +<li>Changelog News</li> +<li>VK Newsletter</li> <li>byteSizeGo</li> +<li>The Pragmatic Engineer</li> +<li>The Valuable Dev</li> +<li>Register Spill</li> +<li>Applied Go Weekly Newsletter</li> </ul><br /> <h2 style='display: inline' id='magazines-i-liked'>Magazines I like(d)</h2><br /> <br /> @@ -246,8 +246,8 @@ <br /> <ul> <li>LWN (online only)</li> -<li>Linux User</li> <li>freeX (not published anymore)</li> +<li>Linux User</li> <li>Linux Magazine</li> </ul><br /> <h1 style='display: inline' id='formal-education'>Formal education</h1><br /> diff --git a/gemfeed/2016-04-09-jails-and-zfs-on-freebsd-with-puppet.html b/gemfeed/2016-04-09-jails-and-zfs-on-freebsd-with-puppet.html index a5ab888b..a9d99848 100644 --- a/gemfeed/2016-04-09-jails-and-zfs-on-freebsd-with-puppet.html +++ b/gemfeed/2016-04-09-jails-and-zfs-on-freebsd-with-puppet.html @@ -413,6 +413,7 @@ Notice: Finished catalog run in 206.09 seconds <br /> <span>Other *BSD related posts are:</span><br /> <br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <a class='textlink' 
href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> <a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> <a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> diff --git a/gemfeed/2022-07-30-lets-encrypt-with-openbsd-and-rex.html b/gemfeed/2022-07-30-lets-encrypt-with-openbsd-and-rex.html index a6efd632..468e3290 100644 --- a/gemfeed/2022-07-30-lets-encrypt-with-openbsd-and-rex.html +++ b/gemfeed/2022-07-30-lets-encrypt-with-openbsd-and-rex.html @@ -692,6 +692,7 @@ rex commons <br /> <span>Other *BSD related posts are:</span><br /> <br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> <a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> <a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> diff --git a/gemfeed/2024-01-13-one-reason-why-i-love-openbsd.html b/gemfeed/2024-01-13-one-reason-why-i-love-openbsd.html index 9981ab4b..2e8455aa 100644 --- a/gemfeed/2024-01-13-one-reason-why-i-love-openbsd.html +++ b/gemfeed/2024-01-13-one-reason-why-i-love-openbsd.html @@ -72,6 +72,7 @@ $ doas reboot <i><font color="silver"># Just in case, reboot one more time</font <br /> <span>Other *BSD related posts are:</span><br /> <br /> +<a class='textlink' 
href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> <a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> <a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> diff --git a/gemfeed/2024-04-01-KISS-high-availability-with-OpenBSD.html b/gemfeed/2024-04-01-KISS-high-availability-with-OpenBSD.html index 29d0bfdc..e7d38b05 100644 --- a/gemfeed/2024-04-01-KISS-high-availability-with-OpenBSD.html +++ b/gemfeed/2024-04-01-KISS-high-availability-with-OpenBSD.html @@ -331,6 +331,7 @@ http://www.gnu.org/software/src-highlite --> <br /> <span>Other *BSD and KISS related posts are:</span><br /> <br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> <a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> <a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> diff --git a/gemfeed/2024-11-17-f3s-kubernetes-with-freebsd-part-1.html b/gemfeed/2024-11-17-f3s-kubernetes-with-freebsd-part-1.html index cff8f979..6be362b0 100644 --- a/gemfeed/2024-11-17-f3s-kubernetes-with-freebsd-part-1.html +++ b/gemfeed/2024-11-17-f3s-kubernetes-with-freebsd-part-1.html 
@@ -27,6 +27,7 @@ <a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> <a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> <a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <br /> <a href='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png'><img alt='f3s logo' title='f3s logo' src='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png' /></a><br /> <br /> @@ -178,6 +179,7 @@ <br /> <span>Other *BSD-related posts:</span><br /> <br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> <a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> <a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> diff --git a/gemfeed/2024-12-03-f3s-kubernetes-with-freebsd-part-2.html b/gemfeed/2024-12-03-f3s-kubernetes-with-freebsd-part-2.html index a8f91730..62b03900 100644 --- a/gemfeed/2024-12-03-f3s-kubernetes-with-freebsd-part-2.html +++ b/gemfeed/2024-12-03-f3s-kubernetes-with-freebsd-part-2.html @@ -2,7 +2,7 @@ <html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en"> <head> <meta http-equiv="Content-Type" 
content="text/html; charset=utf-8" /> -<title>Deciding on the hardware</title> +<title>f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</title> <link rel="shortcut icon" type="image/gif" href="/favicon.ico" /> <link rel="stylesheet" href="../style.css" /> <link rel="stylesheet" href="style-override.css" /> @@ -11,7 +11,7 @@ <p class="header"> <a href="https://foo.zone">Home</a> | <a href="https://codeberg.org/snonux/foo.zone/src/branch/content-md/gemfeed/2024-12-03-f3s-kubernetes-with-freebsd-part-2.md">Markdown</a> | <a href="gemini://foo.zone/gemfeed/2024-12-03-f3s-kubernetes-with-freebsd-part-2.gmi">Gemini</a> </p> -<span> f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</span><br /> +<h1 style='display: inline' id='f3s-kubernetes-with-freebsd---part-2-hardware-and-base-installation'>f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</h1><br /> <br /> <span class='quote'>Published at 2024-12-02T23:48:21+02:00</span><br /> <br /> @@ -27,6 +27,7 @@ <a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> <a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> <a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <br /> <a href='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png'><img alt='f3s logo' title='f3s logo' src='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png' /></a><br /> <br /> @@ -37,6 +38,7 @@ <h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br /> <br /> <ul> +<li><a 
href='#f3s-kubernetes-with-freebsd---part-2-hardware-and-base-installation'>f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a></li> <li><a href='#deciding-on-the-hardware'>Deciding on the hardware</a></li> <li>⇢ <a href='#not-arm-but-intel-n100-'>Not ARM but Intel N100 </a></li> <li>⇢ <a href='#beelink-unboxing'>Beelink unboxing</a></li> @@ -358,6 +360,7 @@ dev.cpu.<font color="#000000">0</font>.freq: <font color="#000000">2922</font> <br /> <span>Other *BSD-related posts:</span><br /> <br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> <a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> <a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> diff --git a/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.html b/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.html index 0cd268b0..9f46ecd8 100644 --- a/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.html +++ b/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.html @@ -23,6 +23,7 @@ <a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs (You are currently reading this)</a><br /> <a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> <a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> +<a 
class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <br /> <a href='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png'><img alt='f3s logo' title='f3s logo' src='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png' /></a><br /> <br /> @@ -599,6 +600,7 @@ Apr <font color="#000000">4</font> <font color="#000000">23</font>:<font color= <br /> <span>Other *BSD-related posts:</span><br /> <br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> <a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> <a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs (You are currently reading this)</a><br /> diff --git a/gemfeed/2025-05-11-f3s-kubernetes-with-freebsd-part-5.html b/gemfeed/2025-05-11-f3s-kubernetes-with-freebsd-part-5.html index 2c83ca03..3defacc1 100644 --- a/gemfeed/2025-05-11-f3s-kubernetes-with-freebsd-part-5.html +++ b/gemfeed/2025-05-11-f3s-kubernetes-with-freebsd-part-5.html @@ -27,6 +27,7 @@ <a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> <a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network (You are currently reading this)</a><br /> <a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> 
+<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <br /> <a href='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png'><img alt='f3s logo' title='f3s logo' src='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png' /></a><br /> <br /> @@ -1007,6 +1008,7 @@ peer: 2htXdNcxzpI2FdPDJy4T4VGtm1wpMEQu1AkQHjNY6F8= <br /> <span>Other *BSD-related posts:</span><br /> <br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> <a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network (You are currently reading this)</a><br /> <a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> diff --git a/gemfeed/2025-07-14-f3s-kubernetes-with-freebsd-part-6.html b/gemfeed/2025-07-14-f3s-kubernetes-with-freebsd-part-6.html index 5d59b5cc..9af59a5d 100644 --- a/gemfeed/2025-07-14-f3s-kubernetes-with-freebsd-part-6.html +++ b/gemfeed/2025-07-14-f3s-kubernetes-with-freebsd-part-6.html @@ -23,6 +23,7 @@ <a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> <a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> <a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage (You are currently reading this)</a><br /> +<a class='textlink' 
href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <br /> <a href='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png'><img alt='f3s logo' title='f3s logo' src='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png' /></a><br /> <br /> @@ -826,7 +827,7 @@ ifconfig_re0_alias0=<font color="#808080">"inet vhid 1 pass testpass alias 192.1 <span>Next, update <span class='inlinecode'>/etc/hosts</span> on all nodes (<span class='inlinecode'>f0</span>, <span class='inlinecode'>f1</span>, <span class='inlinecode'>f2</span>, <span class='inlinecode'>r0</span>, <span class='inlinecode'>r1</span>, <span class='inlinecode'>r2</span>) to resolve the VIP hostname:</span><br /> <br /> <pre> -192.168.1.138 f3s-storage-ha f3s-storage-ha.lan f3s-storage-ha.lan.buetow.org +192.168.2.138 f3s-storage-ha f3s-storage-ha.wg0 f3s-storage-ha.wg0.wan.buetow.org </pre> <br /> <span>This allows clients to connect to <span class='inlinecode'>f3s-storage-ha</span> regardless of which physical server is currently the MASTER.</span><br /> @@ -1582,7 +1583,7 @@ http://www.gnu.org/software/src-highlite --> clientaddr=<font color="#000000">127.0</font>.<font color="#000000">0.1</font>,local_lock=none,addr=<font color="#000000">127.0</font>.<font color="#000000">0.1</font>) <i><font color="silver"># For persistent mount, add to /etc/fstab:</font></i> -<font color="#000000">127.0</font>.<font color="#000000">0.1</font>:/data/nfs/k3svolumes /data/nfs/k3svolumes nfs4 port=<font color="#000000">2323</font>,_netdev <font color="#000000">0</font> <font color="#000000">0</font> +<font color="#000000">127.0</font>.<font color="#000000">0.1</font>:/k3svolumes /data/nfs/k3svolumes nfs4 port=<font color="#000000">2323</font>,_netdev,soft,timeo=<font color="#000000">10</font>,retrans=<font color="#000000">2</font>,intr <font color="#000000">0</font> <font color="#000000">0</font> </pre> <br /> <span>Note: The mount uses 
localhost (<span class='inlinecode'>127.0.0.1</span>) because stunnel is listening locally and forwarding the encrypted traffic to the remote server.</span><br /> @@ -1860,10 +1861,13 @@ Jul <font color="#000000">06</font> <font color="#000000">10</font>:<font color= <span>Both technologies could run on top of our encrypted ZFS volumes, combining ZFS's data integrity and encryption features with distributed storage capabilities. This would be particularly interesting for workloads that need either S3-compatible APIs (MinIO) or transparent distributed POSIX storage (MooseFS). What about Ceph and GlusterFS? Unfortunately, there doesn't seem to be great native FreeBSD support for them. However, other alternatives also appear suitable for my use case.</span><br /> <br /> <br /> -<span>I'm looking forward to the next post in this series, where we will set up k3s (Kubernetes) on the Linux VMs.</span><br /> +<span>Read the next post of this series:</span><br /> +<br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <br /> <span>Other *BSD-related posts:</span><br /> <br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage (You are currently reading this)</a><br /> <a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> <a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> diff --git a/gemfeed/2025-10-02-f3s-kubernetes-with-freebsd-part-7.html 
b/gemfeed/2025-10-02-f3s-kubernetes-with-freebsd-part-7.html new file mode 100644 index 00000000..881fe7f4 --- /dev/null +++ b/gemfeed/2025-10-02-f3s-kubernetes-with-freebsd-part-7.html @@ -0,0 +1,1088 @@ +<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> +<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en"> +<head> +<meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> +<title>f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</title> +<link rel="shortcut icon" type="image/gif" href="/favicon.ico" /> +<link rel="stylesheet" href="../style.css" /> +<link rel="stylesheet" href="style-override.css" /> +</head> +<body> +<p class="header"> +<a href="https://foo.zone">Home</a> | <a href="https://codeberg.org/snonux/foo.zone/src/branch/content-md/gemfeed/2025-10-02-f3s-kubernetes-with-freebsd-part-7.md">Markdown</a> | <a href="gemini://foo.zone/gemfeed/2025-10-02-f3s-kubernetes-with-freebsd-part-7.gmi">Gemini</a> +</p> +<h1 style='display: inline' id='f3s-kubernetes-with-freebsd---part-7-k3s-and-first-pod-deployments'>f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</h1><br /> +<br /> +<span class='quote'>Published at 2025-10-02T11:27:19+03:00</span><br /> +<br /> +<span>This is the seventh blog post about the f3s series for my self-hosting demands in a home lab. f3s? 
The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution I use on FreeBSD-based physical machines.</span><br /> +<br /> +<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> +<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br /> +<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br /> +<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> +<a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> +<a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments (You are currently reading this)</a><br /> +<br /> +<a href='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png'><img alt='f3s logo' title='f3s logo' src='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png' /></a><br /> +<br /> +<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br /> +<br /> +<ul> +<li><a href='#f3s-kubernetes-with-freebsd---part-7-k3s-and-first-pod-deployments'>f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a></li> +<li>⇢ <a href='#introduction'>Introduction</a></li> +<li>⇢ <a href='#updating'>Updating</a></li> +<li>⇢ <a href='#installing-k3s'>Installing k3s</a></li> +<li>⇢ ⇢ <a 
href='#generating-k3stoken-and-starting-the-first-k3s-node'>Generating <span class='inlinecode'>K3S_TOKEN</span> and starting the first k3s node</a></li> +<li>⇢ ⇢ <a href='#adding-the-remaining-nodes-to-the-cluster'>Adding the remaining nodes to the cluster</a></li> +<li>⇢ <a href='#test-deployments'>Test deployments</a></li> +<li>⇢ ⇢ <a href='#test-deployment-to-kubernetes'>Test deployment to Kubernetes</a></li> +<li>⇢ ⇢ <a href='#test-deployment-with-persistent-volume-claim'>Test deployment with persistent volume claim</a></li> +<li>⇢ ⇢ <a href='#scaling-traefik-for-faster-failover'>Scaling Traefik for faster failover</a></li> +<li>⇢ <a href='#make-it-accessible-from-the-public-internet'>Make it accessible from the public internet</a></li> +<li>⇢ ⇢ <a href='#openbsd-relayd-configuration'>OpenBSD relayd configuration</a></li> +<li>⇢ <a href='#deploying-the-private-docker-image-registry'>Deploying the private Docker image registry</a></li> +<li>⇢ ⇢ <a href='#prepare-the-nfs-backed-storage'>Prepare the NFS-backed storage</a></li> +<li>⇢ ⇢ <a href='#install-or-upgrade-the-chart'>Install (or upgrade) the chart</a></li> +<li>⇢ ⇢ <a href='#allow-nodes-and-workstations-to-trust-the-registry'>Allow nodes and workstations to trust the registry</a></li> +<li>⇢ ⇢ <a href='#pushing-and-pulling-images'>Pushing and pulling images</a></li> +<li>⇢ <a href='#example-anki-sync-server-from-the-private-registry'>Example: Anki Sync Server from the private registry</a></li> +<li>⇢ ⇢ <a href='#build-and-push-the-image'>Build and push the image</a></li> +<li>⇢ ⇢ <a href='#create-the-anki-secret-and-storage-on-the-cluster'>Create the Anki secret and storage on the cluster</a></li> +<li>⇢ ⇢ <a href='#deploy-the-chart'>Deploy the chart</a></li> +<li>⇢ <a href='#nfsv4-uid-mapping-for-postgres-backed-and-other-apps'>NFSv4 UID mapping for Postgres-backed (and other) apps</a></li> +<li>⇢ ⇢ <a href='#helm-charts-currently-in-service'>Helm charts currently in service</a></li> +</ul><br /> +<h2 
style='display: inline' id='introduction'>Introduction</h2><br /> +<br /> +<span>In this blog post, I am finally going to install k3s (the Kubernetes distribution I use) on the whole setup and deploy the first workloads (Helm charts and a private registry) to it.</span><br /> +<br /> +<a class='textlink' href='https://k3s.io'>https://k3s.io</a><br /> +<br /> +<h2 style='display: inline' id='updating'>Updating</h2><br /> +<br /> +<span>Before proceeding, I bring all systems involved up to date. On all three Rocky Linux 9 boxes <span class='inlinecode'>r0</span>, <span class='inlinecode'>r1</span>, and <span class='inlinecode'>r2</span>:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>dnf update -y +reboot +</pre> +<br /> +<span>On the FreeBSD hosts, I upgraded from FreeBSD 14.2 to 14.3-RELEASE, running this on all three hosts <span class='inlinecode'>f0</span>, <span class='inlinecode'>f1</span> and <span class='inlinecode'>f2</span>:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:~ % doas freebsd-update fetch +paul@f0:~ % doas freebsd-update install +paul@f0:~ % doas reboot +. +. +. +paul@f0:~ % doas freebsd-update -r <font color="#000000">14.3</font>-RELEASE upgrade +paul@f0:~ % doas freebsd-update install +paul@f0:~ % doas freebsd-update install +paul@f0:~ % doas reboot +. +. +. +paul@f0:~ % doas freebsd-update install +paul@f0:~ % doas pkg update +paul@f0:~ % doas pkg upgrade +paul@f0:~ % doas reboot +. +. +.
+paul@f0:~ % uname -a +FreeBSD f0.lan.buetow.org <font color="#000000">14.3</font>-RELEASE FreeBSD <font color="#000000">14.3</font>-RELEASE + releng/<font color="#000000">14.3</font>-n<font color="#000000">271432</font>-8c9ce319fef7 GENERIC amd64 +</pre> +<br /> +<h2 style='display: inline' id='installing-k3s'>Installing k3s</h2><br /> +<br /> +<h3 style='display: inline' id='generating-k3stoken-and-starting-the-first-k3s-node'>Generating <span class='inlinecode'>K3S_TOKEN</span> and starting the first k3s node</h3><br /> +<br /> +<span>I generated the k3s token on my Fedora laptop with <span class='inlinecode'>pwgen -n 32</span> and selected one of the results. Then, on all three <span class='inlinecode'>r</span> hosts, I ran the following (replace SECRET_TOKEN with the actual secret):</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>[root@r0 ~]<i><font color="silver"># echo -n SECRET_TOKEN > ~/.k3s_token</font></i> +</pre> +<br /> +<span>The following steps are also documented on the k3s website:</span><br /> +<br /> +<a class='textlink' href='https://docs.k3s.io/datastore/ha-embedded'>https://docs.k3s.io/datastore/ha-embedded</a><br /> +<br /> +<span>To bootstrap k3s on the first node, I ran this on <span class='inlinecode'>r0</span>:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>[root@r0 ~]<i><font color="silver"># curl -sfL https://get.k3s.io | K3S_TOKEN=$(cat ~/.k3s_token) \</font></i> + sh -s - server --cluster-init --tls-san=r0.wg0.wan.buetow.org +[INFO] Finding release <b><u><font color="#000000">for</font></u></b> channel stable +[INFO] Using v1.<font color="#000000">32.6</font>+k3s1 as release +. +. +. 
+[INFO] systemd: Starting k3s +</pre> +<br /> +<h3 style='display: inline' id='adding-the-remaining-nodes-to-the-cluster'>Adding the remaining nodes to the cluster</h3><br /> +<br /> +<span>Then I ran on the other two nodes <span class='inlinecode'>r1</span> and <span class='inlinecode'>r2</span>:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>[root@r1 ~]<i><font color="silver"># curl -sfL https://get.k3s.io | K3S_TOKEN=$(cat ~/.k3s_token) \</font></i> + sh -s - server --server https://r<font color="#000000">0</font>.wg0.wan.buetow.org:<font color="#000000">6443</font> \ + --tls-san=r1.wg0.wan.buetow.org + +[root@r2 ~]<i><font color="silver"># curl -sfL https://get.k3s.io | K3S_TOKEN=$(cat ~/.k3s_token) \</font></i> + sh -s - server --server https://r<font color="#000000">0</font>.wg0.wan.buetow.org:<font color="#000000">6443</font> \ + --tls-san=r2.wg0.wan.buetow.org +. +. +. 
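The two join invocations above differ only in their `--tls-san` value. As a sanity check, here is a small POSIX-shell sketch (a hypothetical helper, not part of the original setup, using the exact hostnames from this series) that renders the join command for any number of additional server nodes:

```shell
#!/bin/sh
# Print the k3s "join" command for each additional server node.
# The bootstrap node r0 is always the --server target; only the
# --tls-san entry varies per node.
print_join_cmds() {
    for node in "$@"; do
        printf 'curl -sfL https://get.k3s.io | K3S_TOKEN=$(cat ~/.k3s_token) \\\n'
        printf '  sh -s - server --server https://r0.wg0.wan.buetow.org:6443 \\\n'
        printf '  --tls-san=%s.wg0.wan.buetow.org\n' "$node"
    done
}

print_join_cmds r1 r2
```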
+ +</pre> +<br /> +<span>Once done, I had a three-node Kubernetes cluster control plane:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>[root@r0 ~]<i><font color="silver"># kubectl get nodes</font></i> +NAME STATUS ROLES AGE VERSION +r0.lan.buetow.org Ready control-plane,etcd,master 4m44s v1.<font color="#000000">32.6</font>+k3s1 +r1.lan.buetow.org Ready control-plane,etcd,master 3m13s v1.<font color="#000000">32.6</font>+k3s1 +r2.lan.buetow.org Ready control-plane,etcd,master 30s v1.<font color="#000000">32.6</font>+k3s1 + +[root@r0 ~]<i><font color="silver"># kubectl get pods --all-namespaces</font></i> +NAMESPACE NAME READY STATUS RESTARTS AGE +kube-system coredns-5688667fd4-fs2jj <font color="#000000">1</font>/<font color="#000000">1</font> Running <font color="#000000">0</font> 5m27s +kube-system helm-install-traefik-crd-f9hgd <font color="#000000">0</font>/<font color="#000000">1</font> Completed <font color="#000000">0</font> 5m27s +kube-system helm-install-traefik-zqqqk <font color="#000000">0</font>/<font color="#000000">1</font> Completed <font color="#000000">2</font> 5m27s +kube-system local-path-provisioner-774c6665dc-jqlnc <font color="#000000">1</font>/<font color="#000000">1</font> Running <font color="#000000">0</font> 5m27s +kube-system metrics-server-6f4c6675d5-5xpmp <font color="#000000">1</font>/<font color="#000000">1</font> Running <font color="#000000">0</font> 5m27s +kube-system svclb-traefik-411cec5b-cdp2l <font color="#000000">2</font>/<font color="#000000">2</font> Running <font color="#000000">0</font> 78s +kube-system svclb-traefik-411cec5b-f625r <font color="#000000">2</font>/<font color="#000000">2</font> Running <font color="#000000">0</font> 4m58s +kube-system svclb-traefik-411cec5b-twrd<font color="#000000">7</font> <font color="#000000">2</font>/<font color="#000000">2</font> Running <font 
color="#000000">0</font> 4m2s +kube-system traefik-c98fdf6fb-lt6fx <font color="#000000">1</font>/<font color="#000000">1</font> Running <font color="#000000">0</font> 4m58s +</pre> +<br /> +<span>To connect with <span class='inlinecode'>kubectl</span> from my Fedora laptop, I had to copy <span class='inlinecode'>/etc/rancher/k3s/k3s.yaml</span> from <span class='inlinecode'>r0</span> to <span class='inlinecode'>~/.kube/config</span> and then replace the value of the <span class='inlinecode'>server</span> field with <span class='inlinecode'>r0.lan.buetow.org</span>. kubectl can then manage the cluster. Note that this step must be repeated when I want to connect to another node of the cluster (e.g. when <span class='inlinecode'>r0</span> is down).</span><br /> +<br /> +<h2 style='display: inline' id='test-deployments'>Test deployments</h2><br /> +<br /> +<h3 style='display: inline' id='test-deployment-to-kubernetes'>Test deployment to Kubernetes</h3><br /> +<br /> +<span>Let's create a test namespace:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>> ~ kubectl create namespace <b><u><font color="#000000">test</font></u></b> +namespace/test created + +> ~ kubectl get namespaces +NAME STATUS AGE +default Active 6h11m +kube-node-lease Active 6h11m +kube-public Active 6h11m +kube-system Active 6h11m +<b><u><font color="#000000">test</font></u></b> Active 5s + +> ~ kubectl config set-context --current --namespace=<b><u><font color="#000000">test</font></u></b> +Context <font color="#808080">"default"</font> modified.
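The kubeconfig copy-and-edit step described earlier can also be scripted. A minimal sketch (an illustrative helper, assuming k3s's default `server: https://127.0.0.1:6443` entry in the copied file and GNU sed on the Fedora laptop):

```shell
#!/bin/sh
# Rewrite the server field of a copied k3s kubeconfig so kubectl
# talks to the given node instead of 127.0.0.1.
# Usage: rewrite_kubeconfig <kubeconfig-file> <node-hostname>
rewrite_kubeconfig() {
    sed -i "s|https://127.0.0.1:6443|https://$2:6443|" "$1"
}

# Typical flow, run from the laptop (hostname per this series):
#   scp root@r0.lan.buetow.org:/etc/rancher/k3s/k3s.yaml ~/.kube/config
#   rewrite_kubeconfig ~/.kube/config r0.lan.buetow.org
```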
+</pre> +<br /> +<span>And let's also create an Apache test pod:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>> ~ cat <<END > apache-deployment.yaml +<i><font color="silver"># Apache HTTP Server Deployment</font></i> +apiVersion: apps/v<font color="#000000">1</font> +kind: Deployment +metadata: + name: apache-deployment +spec: + replicas: <font color="#000000">1</font> + selector: + matchLabels: + app: apache + template: + metadata: + labels: + app: apache + spec: + containers: + - name: apache + image: httpd:latest + ports: + <i><font color="silver"># Container port where Apache listens</font></i> + - containerPort: <font color="#000000">80</font> +END + +> ~ kubectl apply -f apache-deployment.yaml +deployment.apps/apache-deployment created + +> ~ kubectl get all +NAME READY STATUS RESTARTS AGE +pod/apache-deployment-5fd955856f-4pjmf <font color="#000000">1</font>/<font color="#000000">1</font> Running <font color="#000000">0</font> 7s + +NAME READY UP-TO-DATE AVAILABLE AGE +deployment.apps/apache-deployment <font color="#000000">1</font>/<font color="#000000">1</font> <font color="#000000">1</font> <font color="#000000">1</font> 7s + +NAME DESIRED CURRENT READY AGE +replicaset.apps/apache-deployment-5fd955856f <font color="#000000">1</font> <font color="#000000">1</font> <font color="#000000">1</font> 7s +</pre> +<br /> +<span>Let's also create a service: </span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>> ~ cat <<END > apache-service.yaml +apiVersion: v1 +kind: Service +metadata: + labels: + app: apache + name: apache-service +spec: + ports: + - name: web + port: <font color="#000000">80</font> + protocol: TCP + <i><font color="silver"># Expose port 80 on the service</font></i> + targetPort: <font color="#000000">80</font> + 
selector: + <i><font color="silver"># Link this service to pods with the label app=apache</font></i> + app: apache +END + +> ~ kubectl apply -f apache-service.yaml +service/apache-service created + +> ~ kubectl get service +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +apache-service ClusterIP <font color="#000000">10.43</font>.<font color="#000000">249.165</font> <none> <font color="#000000">80</font>/TCP 4s +</pre> +<br /> +<span>Now let's create an ingress:</span><br /> +<br /> +<span class='quote'>Note: I've modified the hosts listed in this example after I published this blog post to ensure that there aren't any bots scraping it.</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>> ~ cat <<END > apache-ingress.yaml + +apiVersion: networking.k8s.io/v<font color="#000000">1</font> +kind: Ingress +metadata: + name: apache-ingress + namespace: <b><u><font color="#000000">test</font></u></b> + annotations: + spec.ingressClassName: traefik + traefik.ingress.kubernetes.io/router.entrypoints: web +spec: + rules: + - host: f3s.foo.zone + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: apache-service + port: + number: <font color="#000000">80</font> + - host: standby.f3s.foo.zone + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: apache-service + port: + number: <font color="#000000">80</font> + - host: www.f3s.foo.zone + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: apache-service + port: + number: <font color="#000000">80</font> +END + +> ~ kubectl apply -f apache-ingress.yaml +ingress.networking.k8s.io/apache-ingress created + +> ~ kubectl describe ingress +Name: apache-ingress +Labels: <none> +Namespace: <b><u><font color="#000000">test</font></u></b> +Address: <font color="#000000">192.168</font>.<font color="#000000">1.120</font>,<font 
color="#000000">192.168</font>.<font color="#000000">1.121</font>,<font color="#000000">192.168</font>.<font color="#000000">1.122</font> +Ingress Class: traefik +Default backend: <default> +Rules: + Host Path Backends + ---- ---- -------- + f3s.foo.zone + / apache-service:<font color="#000000">80</font> (<font color="#000000">10.42</font>.<font color="#000000">1.11</font>:<font color="#000000">80</font>) + standby.f3s.foo.zone + / apache-service:<font color="#000000">80</font> (<font color="#000000">10.42</font>.<font color="#000000">1.11</font>:<font color="#000000">80</font>) + www.f3s.foo.zone + / apache-service:<font color="#000000">80</font> (<font color="#000000">10.42</font>.<font color="#000000">1.11</font>:<font color="#000000">80</font>) +Annotations: spec.ingressClassName: traefik + traefik.ingress.kubernetes.io/router.entrypoints: web +Events: <none> +</pre> +<br /> +<span>Notes: </span><br /> +<br /> +<ul> +<li>In the ingress, I use plain HTTP (web) for the Traefik rule, as all the "production" traffic will be routed through a WireGuard tunnel anyway, as I will show later.</li> +</ul><br /> +<span>So I tested the Apache web server through the ingress rule:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>> ~ curl -H <font color="#808080">"Host: www.f3s.foo.zone"</font> http://r<font color="#000000">0</font>.lan.buetow.org:<font color="#000000">80</font> +<html><body><h1>It works!</h<font color="#000000">1</font>></body></html> +</pre> +<br /> +<h3 style='display: inline' id='test-deployment-with-persistent-volume-claim'>Test deployment with persistent volume claim</h3><br /> +<br /> +<span>Next, I modified the Apache example to serve the <span class='inlinecode'>htdocs</span> directory from the NFS share I created in the previous blog post. I used the following manifests. 
Most of them are the same as before, except for the persistent volume claim and the volume mount in the Apache deployment.</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>> ~ cat <<END > apache-deployment.yaml +<i><font color="silver"># Apache HTTP Server Deployment</font></i> +apiVersion: apps/v<font color="#000000">1</font> +kind: Deployment +metadata: + name: apache-deployment + namespace: <b><u><font color="#000000">test</font></u></b> +spec: + replicas: <font color="#000000">2</font> + selector: + matchLabels: + app: apache + template: + metadata: + labels: + app: apache + spec: + containers: + - name: apache + image: httpd:latest + ports: + <i><font color="silver"># Container port where Apache listens</font></i> + - containerPort: <font color="#000000">80</font> + readinessProbe: + httpGet: + path: / + port: <font color="#000000">80</font> + initialDelaySeconds: <font color="#000000">5</font> + periodSeconds: <font color="#000000">10</font> + livenessProbe: + httpGet: + path: / + port: <font color="#000000">80</font> + initialDelaySeconds: <font color="#000000">15</font> + periodSeconds: <font color="#000000">10</font> + volumeMounts: + - name: apache-htdocs + mountPath: /usr/local/apache<font color="#000000">2</font>/htdocs/ + volumes: + - name: apache-htdocs + persistentVolumeClaim: + claimName: example-apache-pvc +END + +> ~ cat <<END > apache-ingress.yaml +apiVersion: networking.k8s.io/v<font color="#000000">1</font> +kind: Ingress +metadata: + name: apache-ingress + namespace: <b><u><font color="#000000">test</font></u></b> + annotations: + spec.ingressClassName: traefik + traefik.ingress.kubernetes.io/router.entrypoints: web +spec: + rules: + - host: f3s.foo.zone + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: apache-service + port: + number: <font color="#000000">80</font> + - host: standby.f3s.foo.zone + 
http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: apache-service + port: + number: <font color="#000000">80</font> + - host: www.f3s.foo.zone + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: apache-service + port: + number: <font color="#000000">80</font> +END + +> ~ cat <<END > apache-persistent-volume.yaml +apiVersion: v1 +kind: PersistentVolume +metadata: + name: example-apache-pv +spec: + capacity: + storage: 1Gi + volumeMode: Filesystem + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Retain + hostPath: + path: /data/nfs/k3svolumes/example-apache-volume-claim + <b><u><font color="#000000">type</font></u></b>: Directory +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: example-apache-pvc + namespace: <b><u><font color="#000000">test</font></u></b> +spec: + storageClassName: <font color="#808080">""</font> + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi +END + +> ~ cat <<END > apache-service.yaml +apiVersion: v1 +kind: Service +metadata: + labels: + app: apache + name: apache-service + namespace: <b><u><font color="#000000">test</font></u></b> +spec: + ports: + - name: web + port: <font color="#000000">80</font> + protocol: TCP + <i><font color="silver"># Expose port 80 on the service</font></i> + targetPort: <font color="#000000">80</font> + selector: + <i><font color="silver"># Link this service to pods with the label app=apache</font></i> + app: apache +END +</pre> +<br /> +<span>I applied the manifests:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>> ~ kubectl apply -f apache-persistent-volume.yaml +> ~ kubectl apply -f apache-service.yaml +> ~ kubectl apply -f apache-deployment.yaml +> ~ kubectl apply -f apache-ingress.yaml +</pre> +<br /> +<span>Looking at the deployment, I could see it failed because the directory didn't 
exist yet on the NFS share (note that I also increased the replica count to 2 so if one node goes down there's already a replica running on another node for faster failover):</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>> ~ kubectl get pods +NAME READY STATUS RESTARTS AGE +apache-deployment-5b96bd6b6b-fv2jx <font color="#000000">0</font>/<font color="#000000">1</font> ContainerCreating <font color="#000000">0</font> 9m15s +apache-deployment-5b96bd6b6b-ax2ji <font color="#000000">0</font>/<font color="#000000">1</font> ContainerCreating <font color="#000000">0</font> 9m15s + +> ~ kubectl describe pod apache-deployment-5b96bd6b6b-fv2jx | tail -n <font color="#000000">5</font> +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Scheduled 9m34s default-scheduler Successfully + assigned test/apache-deployment-5b96bd6b6b-fv2jx to r2.lan.buetow.org + Warning FailedMount 80s (x12 over 9m34s) kubelet MountVolume.SetUp + failed <b><u><font color="#000000">for</font></u></b> volume <font color="#808080">"example-apache-pv"</font> : hostPath <b><u><font color="#000000">type</font></u></b> check failed: + /data/nfs/k3svolumes/example-apache is not a directory +</pre> +<br /> +<span>That's intentional—I needed to create the directory on the NFS share first, so I did that (e.g. 
on <span class='inlinecode'>r0</span>):</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>[root@r0 ~]<i><font color="silver"># mkdir /data/nfs/k3svolumes/example-apache-volume-claim/</font></i> + +[root@r0 ~]<i><font color="silver"># cat <<END > /data/nfs/k3svolumes/example-apache-volume-claim/index.html</font></i> +<!DOCTYPE html> +<html> +<head> + <title>Hello, it works</title> +</head> +<body> + <h1>Hello, it works!</h<font color="#000000">1</font>> + <p>This site is served via a PVC!</p> +</body> +</html> +END +</pre> +<br /> +<span>The <span class='inlinecode'>index.html</span> file gives us some actual content to serve. After I deleted the pod, the deployment recreated it and the volume mounted correctly:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>> ~ kubectl delete pod apache-deployment-5b96bd6b6b-fv2jx + +> ~ curl -H <font color="#808080">"Host: www.f3s.foo.zone"</font> http://r<font color="#000000">0</font>.lan.buetow.org:<font color="#000000">80</font> +<!DOCTYPE html> +<html> +<head> + <title>Hello, it works</title> +</head> +<body> + <h1>Hello, it works!</h<font color="#000000">1</font>> + <p>This site is served via a PVC!</p> +</body> +</html> +</pre> +<br /> +<h3 style='display: inline' id='scaling-traefik-for-faster-failover'>Scaling Traefik for faster failover</h3><br /> +<br /> +<span>Traefik (used for ingress on k3s) ships with a single replica by default, but for faster failover I bumped it to two replicas so each worker node runs one pod. That way, if a node disappears, the service stays up while Kubernetes schedules a replacement.
Here's the command I used:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>> ~ kubectl -n kube-system scale deployment traefik --replicas=<font color="#000000">2</font> +</pre> +<br /> +<span>And the result:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>> ~ kubectl -n kube-system get pods -l app.kubernetes.io/name=traefik +kube-system traefik-c98fdf6fb-97kqk <font color="#000000">1</font>/<font color="#000000">1</font> Running <font color="#000000">19</font> (53d ago) 64d +kube-system traefik-c98fdf6fb-9npg2 <font color="#000000">1</font>/<font color="#000000">1</font> Running <font color="#000000">11</font> (53d ago) 61d +</pre> +<br /> +<h2 style='display: inline' id='make-it-accessible-from-the-public-internet'>Make it accessible from the public internet</h2><br /> +<br /> +<span>Next, I made this accessible through the public internet via the <span class='inlinecode'>www.f3s.foo.zone</span> hosts. As a reminder from part 1 of this series, I reviewed the section titled "OpenBSD/relayd to the rescue for external connectivity":</span><br /> +<br /> +<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> +<br /> +<span class='quote'>All apps should be reachable through the internet (e.g., from my phone or computer when travelling). For external connectivity and TLS management, I've got two OpenBSD VMs (one hosted by OpenBSD Amsterdam and another hosted by Hetzner) handling public-facing services like DNS, relaying traffic, and automating Let's Encrypt certificates.</span><br /> +<br /> +<span class='quote'>All of this (every Linux VM to every OpenBSD box) will be connected via WireGuard tunnels, keeping everything private and secure. 
There will be 6 WireGuard tunnels (3 k3s nodes times two OpenBSD VMs).</span><br /> +<br /> +<span class='quote'>So, when I want to access a service running in k3s, I will hit an external DNS endpoint (with the authoritative DNS servers being the OpenBSD boxes). The DNS will resolve to the master OpenBSD VM (see my KISS highly-available with OpenBSD blog post), and from there, the relayd process (with a Let's Encrypt certificate—see my Let's Encrypt with OpenBSD and Rex blog post) will accept the TCP connection and forward it through the WireGuard tunnel to a reachable node port of one of the k3s nodes, thus serving the traffic.</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>> ~ curl https://f3s.foo.zone +<html><body><h1>It works!</h<font color="#000000">1</font>></body></html> + +> ~ curl https://www.f3s.foo.zone +<html><body><h1>It works!</h<font color="#000000">1</font>></body></html> + +> ~ curl https://standby.f3s.foo.zone +<html><body><h1>It works!</h<font color="#000000">1</font>></body></html> +</pre> +<br /> +<span>This is how it works in <span class='inlinecode'>relayd.conf</span> on OpenBSD:</span><br /> +<br /> +<h3 style='display: inline' id='openbsd-relayd-configuration'>OpenBSD relayd configuration</h3><br /> +<br /> +<span>The OpenBSD edge relays keep the Kubernetes-facing addresses for the f3s ingress endpoints in a shared backend table so TLS traffic for every <span class='inlinecode'>f3s</span> hostname lands on the same pool of k3s nodes (pointing to the WireGuard IP addresses of those nodes - remember, they are running locally in my LAN, whereas the OpenBSD edge relays operate in the public internet):</span><br /> +<br /> +<pre> +table <f3s> { + 192.168.2.120 + 192.168.2.121 + 192.168.2.122 +} +</pre> +<br /> +<span>Inside the <span class='inlinecode'>http protocol "https"</span> block, each public hostname gets its Let's 
Encrypt certificate and is matched to that backend table. Besides the primary trio, every service-specific hostname (<span class='inlinecode'>anki</span>, <span class='inlinecode'>bag</span>, <span class='inlinecode'>flux</span>, <span class='inlinecode'>audiobookshelf</span>, <span class='inlinecode'>gpodder</span>, <span class='inlinecode'>radicale</span>, <span class='inlinecode'>vault</span>, <span class='inlinecode'>syncthing</span>, <span class='inlinecode'>uprecords</span>) and their <span class='inlinecode'>www</span> / <span class='inlinecode'>standby</span> aliases reuse the same pool so new apps can go live just by publishing an ingress rule, whereas they will all map to a service running in k3s:</span><br /> +<br /> +<pre> +http protocol "https" { + tls keypair f3s.foo.zone + tls keypair www.f3s.foo.zone + tls keypair standby.f3s.foo.zone + tls keypair anki.f3s.foo.zone + tls keypair www.anki.f3s.foo.zone + tls keypair standby.anki.f3s.foo.zone + tls keypair bag.f3s.foo.zone + tls keypair www.bag.f3s.foo.zone + tls keypair standby.bag.f3s.foo.zone + tls keypair flux.f3s.foo.zone + tls keypair www.flux.f3s.foo.zone + tls keypair standby.flux.f3s.foo.zone + tls keypair audiobookshelf.f3s.foo.zone + tls keypair www.audiobookshelf.f3s.foo.zone + tls keypair standby.audiobookshelf.f3s.foo.zone + tls keypair gpodder.f3s.foo.zone + tls keypair www.gpodder.f3s.foo.zone + tls keypair standby.gpodder.f3s.foo.zone + tls keypair radicale.f3s.foo.zone + tls keypair www.radicale.f3s.foo.zone + tls keypair standby.radicale.f3s.foo.zone + tls keypair vault.f3s.foo.zone + tls keypair www.vault.f3s.foo.zone + tls keypair standby.vault.f3s.foo.zone + tls keypair syncthing.f3s.foo.zone + tls keypair www.syncthing.f3s.foo.zone + tls keypair standby.syncthing.f3s.foo.zone + tls keypair uprecords.f3s.foo.zone + tls keypair www.uprecords.f3s.foo.zone + tls keypair standby.uprecords.f3s.foo.zone + + match request quick header "Host" value "f3s.foo.zone" forward to <f3s> + match 
request quick header "Host" value "www.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "standby.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "anki.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "www.anki.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "standby.anki.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "bag.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "www.bag.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "standby.bag.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "flux.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "www.flux.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "standby.flux.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "audiobookshelf.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "www.audiobookshelf.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "standby.audiobookshelf.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "gpodder.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "www.gpodder.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "standby.gpodder.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "radicale.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "www.radicale.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "standby.radicale.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "vault.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "www.vault.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "standby.vault.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value 
"syncthing.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "www.syncthing.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "standby.syncthing.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "uprecords.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "www.uprecords.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "standby.uprecords.f3s.foo.zone" forward to <f3s> +} +</pre> +<br /> +<span>Both IPv4 and IPv6 listeners reuse the same protocol definition, making the relay transparent for dual-stack clients while still health checking every k3s backend before forwarding traffic over WireGuard:</span><br /> +<br /> +<pre> +relay "https4" { + listen on 46.23.94.99 port 443 tls + protocol "https" + forward to <f3s> port 80 check tcp +} + +relay "https6" { + listen on 2a03:6000:6f67:624::99 port 443 tls + protocol "https" + forward to <f3s> port 80 check tcp +} +</pre> +<br /> +<span>In practice, that means relayd terminates TLS with the correct certificate, keeps the three WireGuard-connected backends in rotation, and ships each request to whichever bhyve VM answers first.</span><br /> +<br /> +<h2 style='display: inline' id='deploying-the-private-docker-image-registry'>Deploying the private Docker image registry</h2><br /> +<br /> +<span>As not all Docker images I want to deploy are available on public Docker registries and as I also build some of them by myself, there is the need of a private registry. 
</span><br /> +<br /> +<span>All manifests for the f3s stack live in my configuration repository:</span><br /> +<br /> +<a class='textlink' href='https://codeberg.org/snonux/conf/src/branch/master/f3s'>codeberg.org/snonux/conf/f3s</a><br /> +<br /> +<span>Within that repo, the <span class='inlinecode'>examples/conf/f3s/registry/</span> directory contains the Helm chart, a <span class='inlinecode'>Justfile</span>, and a detailed <span class='inlinecode'>README</span>. Here's the condensed walkthrough I used to roll out the registry with Helm.</span><br /> +<br /> +<h3 style='display: inline' id='prepare-the-nfs-backed-storage'>Prepare the NFS-backed storage</h3><br /> +<br /> +<span>Create the directory that will hold the registry blobs on the NFS share (I ran this on <span class='inlinecode'>r0</span>, but any node that exports <span class='inlinecode'>/data/nfs/k3svolumes</span> works):</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>[root@r0 ~]<i><font color="silver"># mkdir -p /data/nfs/k3svolumes/registry</font></i> +</pre> +<br /> +<h3 style='display: inline' id='install-or-upgrade-the-chart'>Install (or upgrade) the chart</h3><br /> +<br /> +<span>Clone the repo (or pull the latest changes) on a workstation that has <span class='inlinecode'>helm</span> configured for the cluster, then deploy the chart. 
The Justfile wraps the commands, but the raw Helm invocation looks like this:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>$ git clone https://codeberg.org/snonux/conf/f3s.git +$ cd conf/f3s/examples/conf/f3s/registry +$ helm upgrade --install registry ./helm-chart --namespace infra --create-namespace +</pre> +<br /> +<span>Helm creates the <span class='inlinecode'>infra</span> namespace if it does not exist, provisions a <span class='inlinecode'>PersistentVolume</span>/<span class='inlinecode'>PersistentVolumeClaim</span> pair that points at <span class='inlinecode'>/data/nfs/k3svolumes/registry</span>, and spins up a single registry pod exposed via the <span class='inlinecode'>docker-registry-service</span> NodePort (<span class='inlinecode'>30001</span>). Verify everything is up before continuing:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>$ kubectl get pods --namespace infra +NAME READY STATUS RESTARTS AGE +docker-registry-6bc9bb46bb-6grkr <font color="#000000">1</font>/<font color="#000000">1</font> Running <font color="#000000">6</font> (53d ago) 54d + +$ kubectl get svc docker-registry-service -n infra +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +docker-registry-service NodePort <font color="#000000">10.43</font>.<font color="#000000">141.56</font> <none> <font color="#000000">5000</font>:<font color="#000000">30001</font>/TCP 54d +</pre> +<br /> +<h3 style='display: inline' id='allow-nodes-and-workstations-to-trust-the-registry'>Allow nodes and workstations to trust the registry</h3><br /> +<br /> +<span>The registry listens on plain HTTP, so both Docker daemons on workstations and the k3s nodes need to treat it as an insecure registry. 
That's fine for my personal needs, as:</span><br />
+<br />
+<ul>
+<li>I don't store any secrets in the images</li>
+<li>I access the registry this way only via my LAN</li>
+<li>I may change this later on...</li>
+</ul><br />
+<span>On my Fedora workstation where I build images:</span><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>$ cat <<<font color="#808080">"EOF"</font> | sudo tee /etc/docker/daemon.json >/dev/null
+{
+ <font color="#808080">"insecure-registries"</font>: [
+ <font color="#808080">"r0.lan.buetow.org:30001"</font>,
+ <font color="#808080">"r1.lan.buetow.org:30001"</font>,
+ <font color="#808080">"r2.lan.buetow.org:30001"</font>
+ ]
+}
+EOF
+$ sudo systemctl restart docker
+</pre>
+<br />
+<span>On each k3s node, make <span class='inlinecode'>registry.lan.buetow.org</span> resolve locally and point k3s at the NodePort:</span><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>$ <b><u><font color="#000000">for</font></u></b> node <b><u><font color="#000000">in</font></u></b> r0 r1 r2; <b><u><font color="#000000">do</font></u></b>
+> ssh root@$node <font color="#808080">"echo '127.0.0.1 registry.lan.buetow.org' >> /etc/hosts"</font>
+> <b><u><font color="#000000">done</font></u></b>
+
+$ <b><u><font color="#000000">for</font></u></b> node <b><u><font color="#000000">in</font></u></b> r0 r1 r2; <b><u><font color="#000000">do</font></u></b>
+> ssh root@$node <font color="#808080">"cat <<'EOF' > /etc/rancher/k3s/registries.yaml</font>
+<font color="#808080">mirrors:</font>
+<font color="#808080">  "</font>registry.lan.buetow.org:<font color="#000000">30001</font><font color="#808080">":</font>
+<font color="#808080">    endpoint:</font>
+<font color="#808080">      - "</font>http://localhost:<font color="#000000">30001</font><font color="#808080">"</font>
+<font color="#808080">EOF</font>
+<font color="#808080">systemctl restart k3s"</font>
+> <b><u><font color="#000000">done</font></u></b>
+</pre>
+<br />
+<span>Thanks to the relayd configuration earlier in the post, the external hostnames (<span class='inlinecode'>f3s.foo.zone</span>, etc.) can already reach NodePort <span class='inlinecode'>30001</span>, so publishing the registry to the outside world later is just a matter of wiring up the DNS the same way as for the ingress hosts. For security reasons, though, this is not enabled for now.</span><br />
+<br />
+<h3 style='display: inline' id='pushing-and-pulling-images'>Pushing and pulling images</h3><br />
+<br />
+<span>Tag any locally built image with one of the node hostnames on port <span class='inlinecode'>30001</span>, then push it. I usually target whichever node is closest to me, but any of the three will do:</span><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>$ docker tag my-app:latest r0.lan.buetow.org:<font color="#000000">30001</font>/my-app:latest
+$ docker push r0.lan.buetow.org:<font color="#000000">30001</font>/my-app:latest
+</pre>
+<br />
+<span>Inside the cluster (or from other nodes), reference the image via the service name that Helm created:</span><br />
+<br />
+<pre>
+image: docker-registry-service:5000/my-app:latest
+</pre>
+<br />
+<span>You can test the pull path straight away:</span><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>$ kubectl run registry-test \
+> --image=docker-registry-service:<font color="#000000">5000</font>/my-app:latest \
+> --restart=Never -n <b><u><font color="#000000">test</font></u></b> --command -- sleep <font color="#000000">300</font>
+</pre>
+<br />
+<span>If the pod pulls successfully, the private registry is 
ready for use by the rest of the workloads. Note that the commands above don't work verbatim; they are only meant as an illustration.</span><br />
+<br />
+<h2 style='display: inline' id='example-anki-sync-server-from-the-private-registry'>Example: Anki Sync Server from the private registry</h2><br />
+<br />
+<span>One of the first workloads I migrated onto the k3s cluster after standing up the registry was my Anki sync server. The configuration repo ships everything in <span class='inlinecode'>examples/conf/f3s/anki-sync-server/</span>: a Docker build context plus a Helm chart that references the freshly built image.</span><br />
+<br />
+<h3 style='display: inline' id='build-and-push-the-image'>Build and push the image</h3><br />
+<br />
+<span>The Dockerfile lives under <span class='inlinecode'>docker-image/</span> and takes the Anki release to compile as an <span class='inlinecode'>ANKI_VERSION</span> build argument. The accompanying <span class='inlinecode'>Justfile</span> wraps the steps, but the raw commands look like this:</span><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>$ cd conf/f3s/examples/conf/f3s/anki-sync-server/docker-image
+$ docker build -t anki-sync-server:<font color="#000000">25.07</font>.5b --build-arg ANKI_VERSION=<font color="#000000">25.07</font>.<font color="#000000">5</font> .
+$ docker tag anki-sync-server:<font color="#000000">25.07</font>.5b \
+ r0.lan.buetow.org:<font color="#000000">30001</font>/anki-sync-server:<font color="#000000">25.07</font>.5b
+$ docker push r0.lan.buetow.org:<font color="#000000">30001</font>/anki-sync-server:<font color="#000000">25.07</font>.5b
+</pre>
+<br />
+<span>Because every k3s node treats <span class='inlinecode'>registry.lan.buetow.org:30001</span> as an insecure mirror (see above), the push succeeds regardless of which node answers. 
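</span><br />
+<br />
+<span>To double-check that a pushed image actually landed, the registry can be queried over the standard Docker Registry HTTP API v2. A small sketch; the hostname, port, and repository name below are the ones from this post (an assumption: adjust them to your environment, and note the endpoint is only reachable from inside my LAN):</span><br />
+<br />

```shell
# Sketch: confirm a pushed image is visible via the Docker Registry HTTP API v2.
# REGISTRY defaults to the LAN endpoint used in this post (assumption); the
# fallback echo keeps the script from hanging or failing outside that LAN.
REGISTRY="${REGISTRY:-r0.lan.buetow.org:30001}"

# List all repositories the registry knows about:
curl -s --max-time 5 "http://${REGISTRY}/v2/_catalog" \
  || echo "registry not reachable from here"

# List the tags of one repository, e.g. the image pushed above:
curl -s --max-time 5 "http://${REGISTRY}/v2/anki-sync-server/tags/list" \
  || echo "registry not reachable from here"
```

+<br />
+<span>The <span class='inlinecode'>/v2/_catalog</span> and <span class='inlinecode'>/v2/&lt;name&gt;/tags/list</span> endpoints are part of the registry's standard HTTP API, so no extra tooling is needed beyond <span class='inlinecode'>curl</span>.</span><br />
+<br />
+<span>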
If you prefer the shortcut, <span class='inlinecode'>just f3s</span> in that directory performs the same build/tag/push sequence.</span><br />
+<br />
+<h3 style='display: inline' id='create-the-anki-secret-and-storage-on-the-cluster'>Create the Anki secret and storage on the cluster</h3><br />
+<br />
+<span>The Helm chart expects the <span class='inlinecode'>services</span> namespace, a pre-created NFS directory, and a Kubernetes secret that holds the credentials the upstream container understands:</span><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>$ ssh root@r0 <font color="#808080">"mkdir -p /data/nfs/k3svolumes/anki-sync-server/anki_data"</font>
+$ kubectl create namespace services
+$ kubectl create secret generic anki-sync-server-secret \
+ --from-literal=SYNC_USER1=<font color="#808080">'paul:SECRETPASSWORD'</font> \
+ -n services
+</pre>
+<br />
+<span>If the <span class='inlinecode'>services</span> namespace already exists, you can skip that command (<span class='inlinecode'>kubectl create namespace</span> will simply report that it already exists).</span><br />
+<br />
+<h3 style='display: inline' id='deploy-the-chart'>Deploy the chart</h3><br />
+<br />
+<span>With the prerequisites in place, install (or upgrade) the chart. It pins the container image to the tag we just pushed and mounts the NFS export via a <span class='inlinecode'>PersistentVolume/PersistentVolumeClaim</span> pair:</span><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>$ cd ../helm-chart
+$ helm upgrade --install anki-sync-server . -n services
+</pre>
+<br />
+<span>Helm provisions everything referenced in the templates:</span><br />
+<br />
+<pre>
+containers:
+- name: anki-sync-server
+  image: registry.lan.buetow.org:30001/anki-sync-server:25.07.5b
+  volumeMounts:
+  - name: anki-data
+    mountPath: /anki_data
+</pre>
+<br />
+<span>Once the release comes up, verify that the pod pulled the freshly pushed image and that the ingress we configured earlier resolves through relayd just like the Apache example.</span><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>$ kubectl get pods -n services
+$ kubectl get ingress anki-sync-server-ingress -n services
+$ curl https://anki.f3s.foo.zone/health
+</pre>
+<br />
+<span>All of this runs solely on first-party images that now live in the private registry, proving the full flow from local build to WireGuard-exposed service.</span><br />
+<br />
+<h2 style='display: inline' id='nfsv4-uid-mapping-for-postgres-backed-and-other-apps'>NFSv4 UID mapping for Postgres-backed (and other) apps</h2><br />
+<br />
+<span>NFSv4 only sees numeric user and group IDs, so the <span class='inlinecode'>postgres</span> account created inside the container must exist with the same UID/GID on the Kubernetes worker and on the FreeBSD NFS servers. 
Otherwise the pod starts with UID 999, the export sees it as an unknown anonymous user, and Postgres fails to initialise its data directory.</span><br />
+<br />
+<span>To verify things line up end-to-end I run <span class='inlinecode'>id</span> in the container and on the hosts:</span><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>> ~ kubectl <b><u><font color="#000000">exec</font></u></b> -n services deploy/miniflux-postgres -- id postgres
+uid=<font color="#000000">999</font>(postgres) gid=<font color="#000000">999</font>(postgres) groups=<font color="#000000">999</font>(postgres)
+
+[root@r0 ~]<i><font color="silver"># id postgres</font></i>
+uid=<font color="#000000">999</font>(postgres) gid=<font color="#000000">999</font>(postgres) groups=<font color="#000000">999</font>(postgres)
+
+paul@f0:~ % doas id postgres
+uid=<font color="#000000">999</font>(postgres) gid=<font color="#000000">999</font>(postgres) groups=<font color="#000000">999</font>(postgres)
+</pre>
+<br />
+<span>The Rocky Linux workers get their matching user with plain <span class='inlinecode'>useradd</span>/<span class='inlinecode'>groupadd</span> (repeat on <span class='inlinecode'>r0</span>, <span class='inlinecode'>r1</span>, and <span class='inlinecode'>r2</span>):</span><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>[root@r0 ~]<i><font color="silver"># groupadd --gid 999 postgres</font></i>
+[root@r0 ~]<i><font color="silver"># useradd --uid 999 --gid 999 \</font></i>
+ --home-dir /var/lib/pgsql \
+ --shell /sbin/nologin postgres
+</pre>
+<br />
+<span>FreeBSD uses <span class='inlinecode'>pw</span>, so on each NFS server (<span class='inlinecode'>f0</span>, <span class='inlinecode'>f1</span>, <span class='inlinecode'>f2</span>) I created the same account and 
disabled shell access:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:~ % doas pw groupadd postgres -g <font color="#000000">999</font> +paul@f0:~ % doas pw useradd postgres -u <font color="#000000">999</font> -g postgres \ + -d /var/db/postgres -s /usr/sbin/nologin +</pre> +<br /> +<span>Once the UID/GID exist everywhere, the Miniflux chart in <span class='inlinecode'>examples/conf/f3s/miniflux</span> deploys cleanly. The chart provisions both the application and its bundled Postgres database, mounts the exported directory, and builds the DSN at runtime. The important bits live in <span class='inlinecode'>helm-chart/templates/persistent-volumes.yaml</span> and <span class='inlinecode'>deployment.yaml</span>:</span><br /> +<br /> +<pre> +# Persistent volume lives on the NFS export +hostPath: + path: /data/nfs/k3svolumes/miniflux/data + type: Directory +... +containers: +- name: miniflux-postgres + image: postgres:17 + volumeMounts: + - name: miniflux-postgres-data + mountPath: /var/lib/postgresql/data +</pre> +<br /> +<span>Follow the <span class='inlinecode'>README</span> beside the chart to create the secrets and the target directory:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>$ cd examples/conf/f3s/miniflux/helm-chart +$ mkdir -p /data/nfs/k3svolumes/miniflux/data +$ kubectl create secret generic miniflux-db-password \ + --from-literal=fluxdb_password=<font color="#808080">'YOUR_PASSWORD'</font> -n services +$ kubectl create secret generic miniflux-admin-password \ + --from-literal=admin_password=<font color="#808080">'YOUR_ADMIN_PASSWORD'</font> -n services +$ helm upgrade --install miniflux . 
-n services --create-namespace
+</pre>
+<br />
+<span>And to verify it's all up:</span><br />
+<br />
+<pre>
+$ kubectl get all --namespace=services | grep mini
+pod/miniflux-postgres-556444cb8d-xvv2p 1/1 Running 0 54d
+pod/miniflux-server-85d7c64664-stmt9 1/1 Running 0 54d
+service/miniflux ClusterIP 10.43.47.80 <none> 8080/TCP 54d
+service/miniflux-postgres ClusterIP 10.43.139.50 <none> 5432/TCP 54d
+deployment.apps/miniflux-postgres 1/1 1 1 54d
+deployment.apps/miniflux-server 1/1 1 1 54d
+replicaset.apps/miniflux-postgres-556444cb8d 1 1 1 54d
+replicaset.apps/miniflux-server-85d7c64664 1 1 1 54d
+</pre>
+<br />
+<h3 style='display: inline' id='helm-charts-currently-in-service'>Helm charts currently in service</h3><br />
+<br />
+<span>These are the charts that already live under <span class='inlinecode'>examples/conf/f3s</span> and run on the cluster today (and I'll keep adding more as new services graduate into production):</span><br />
+<br />
+<ul>
+<li><span class='inlinecode'>anki-sync-server</span> — custom-built image served from the private registry, stores decks on <span class='inlinecode'>/data/nfs/k3svolumes/anki-sync-server/anki_data</span>, and authenticates through the <span class='inlinecode'>anki-sync-server-secret</span>.</li>
+<li><span class='inlinecode'>audiobookshelf</span> — media streaming stack with three hostPath mounts (<span class='inlinecode'>config</span>, <span class='inlinecode'>audiobooks</span>, <span class='inlinecode'>podcasts</span>) so the library survives node rebuilds.</li>
+<li><span class='inlinecode'>example-apache</span> — minimal HTTP service I use for smoke-testing ingress and relayd rules.</li>
+<li><span class='inlinecode'>example-apache-volume-claim</span> — Apache plus PVC variant that exercises NFS-backed storage for walkthroughs like the one earlier in this post.</li>
+<li><span class='inlinecode'>miniflux</span> — the Postgres-backed feed 
reader described above, wired for NFSv4 UID mapping and per-release secrets.</li> +<li><span class='inlinecode'>opodsync</span> — podsync deployment with its data directory under <span class='inlinecode'>/data/nfs/k3svolumes/opodsync/data</span>.</li> +<li><span class='inlinecode'>radicale</span> — CalDAV/CardDAV (and gpodder) backend with separate <span class='inlinecode'>collections</span> and <span class='inlinecode'>auth</span> volumes.</li> +<li><span class='inlinecode'>registry</span> — the plain-HTTP Docker registry exposed on NodePort 30001 and mirrored internally as <span class='inlinecode'>registry.lan.buetow.org:30001</span>.</li> +<li><span class='inlinecode'>syncthing</span> — two-volume setup for config and shared data, fronted by the <span class='inlinecode'>syncthing.f3s.foo.zone</span> ingress.</li> +<li><span class='inlinecode'>wallabag</span> — read-it-later service with persistent <span class='inlinecode'>data</span> and <span class='inlinecode'>images</span> directories on the NFS export.</li> +</ul><br /> +<span>I hope you enjoyed this walkthrough. In the next part of this series, I will likely tackle monitoring, backup, or observability. 
I haven't fully decided yet which topic to cover next, so stay tuned!</span><br /> +<br /> +<span>Other *BSD-related posts:</span><br /> +<br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments (You are currently reading this)</a><br /> +<a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> +<a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> +<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> +<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br /> +<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br /> +<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> +<a class='textlink' href='./2024-04-01-KISS-high-availability-with-OpenBSD.html'>2024-04-01 KISS high-availability with OpenBSD</a><br /> +<a class='textlink' href='./2024-01-13-one-reason-why-i-love-openbsd.html'>2024-01-13 One reason why I love OpenBSD</a><br /> +<a class='textlink' href='./2022-10-30-installing-dtail-on-openbsd.html'>2022-10-30 Installing DTail on OpenBSD</a><br /> +<a class='textlink' href='./2022-07-30-lets-encrypt-with-openbsd-and-rex.html'>2022-07-30 Let's Encrypt with OpenBSD and Rex</a><br /> +<a class='textlink' href='./2016-04-09-jails-and-zfs-on-freebsd-with-puppet.html'>2016-04-09 Jails and ZFS with Puppet on FreeBSD</a><br /> +<br /> 
+<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span></span><br /> +<br /> +<a class='textlink' href='../'>Back to the main site</a><br /> +<p class="footer"> + Generated with <a href="https://codeberg.org/snonux/gemtexter">Gemtexter 3.0.1-develop</a> | + served by <a href="https://www.OpenBSD.org">OpenBSD</a>/<a href="https://man.openbsd.org/relayd.8">relayd(8)</a>+<a href="https://man.openbsd.org/httpd.8">httpd(8)</a> | + <a href="https://foo.zone/site-mirrors.html">Site Mirrors</a> + <br /> + Webring: <a href="https://shring.sh/foo.zone/previous">previous</a> | <a href="https://shring.sh">shring</a> | <a href="https://shring.sh/foo.zone/next">next</a> +</p> +</body> +</html> diff --git a/gemfeed/DRAFT-kubernetes-with-freebsd-part-7.html b/gemfeed/DRAFT-kubernetes-with-freebsd-part-7.html deleted file mode 100644 index 53777d6d..00000000 --- a/gemfeed/DRAFT-kubernetes-with-freebsd-part-7.html +++ /dev/null @@ -1,655 +0,0 @@ -<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> -<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en"> -<head> -<meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> -<title>f3s: Kubernetes with FreeBSD - Part 7: First pod deployments</title> -<link rel="shortcut icon" type="image/gif" href="/favicon.ico" /> -<link rel="stylesheet" href="../style.css" /> -<link rel="stylesheet" href="style-override.css" /> -</head> -<body> -<p class="header"> -<a href="https://foo.zone">Home</a> | <a href="https://codeberg.org/snonux/foo.zone/src/branch/content-md/gemfeed/DRAFT-kubernetes-with-freebsd-part-7.md">Markdown</a> | <a href="gemini://foo.zone/gemfeed/DRAFT-kubernetes-with-freebsd-part-7.gmi">Gemini</a> -</p> -<h1 style='display: inline' id='f3s-kubernetes-with-freebsd---part-7-first-pod-deployments'>f3s: Kubernetes with FreeBSD - Part 7: First pod deployments</h1><br /> -<br /> -<span>This is the seventh blog post 
about the f3s series for self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.</span><br /> -<br /> -<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> -<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br /> -<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br /> -<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> -<a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> -<a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> -<br /> -<a href='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png'><img alt='f3s logo' title='f3s logo' src='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png' /></a><br /> -<br /> -<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br /> -<br /> -<ul> -<li><a href='#f3s-kubernetes-with-freebsd---part-7-first-pod-deployments'>f3s: Kubernetes with FreeBSD - Part 7: First pod deployments</a></li> -<li>⇢ <a href='#introduction'>Introduction</a></li> -<li>⇢ <a href='#updating'>Updating</a></li> -<li>⇢ <a href='#installing-k3s'>Installing k3s</a></li> -<li>⇢ ⇢ <a href='#generating-k3stoken-and-starting-first-k3s-node'>Generating <span class='inlinecode'>K3S_TOKEN</span> and starting first k3s node</a></li> -<li>⇢ ⇢ <a href='#adding-the-remaining-nodes-to-the-cluster'>Adding the 
remaining nodes to the cluster</a></li> -<li>⇢ <a href='#test-deployments'>Test deployments</a></li> -<li>⇢ ⇢ <a href='#test-deployment-to-kubernetes'>Test deployment to Kubernetes</a></li> -<li>⇢ ⇢ <a href='#test-deployment-with-persistent-volume-claim'>Test deployment with persistent volume claim</a></li> -<li>⇢ <a href='#make-it-accessible-from-the-public-internet'>Make it accessible from the public internet</a></li> -<li>⇢ <a href='#failure-test'>Failure test</a></li> -</ul><br /> -<h2 style='display: inline' id='introduction'>Introduction</h2><br /> -<br /> -<h2 style='display: inline' id='updating'>Updating</h2><br /> -<br /> -<span>On all three Rocky Linux 9 boxes <span class='inlinecode'>r0</span>, <span class='inlinecode'>r1</span>, and <span class='inlinecode'>r2</span>:</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>dnf update -y -reboot -</pre> -<br /> -<span>On the FreeBSD hosts, upgrading from FreeBSD 14.2 to 14.3-RELEASE, running this on all three hosts <span class='inlinecode'>f0</span>, <span class='inlinecode'>f1</span> and <span class='inlinecode'>f2</span>:</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>paul@f0:~ % doas freebsd-update fetch -paul@f0:~ % doas freebsd-update install -paul@f0:~ % doas reboot -. -. -. -paul@f0:~ % doas freebsd-update -r <font color="#000000">14.3</font>-RELEASE upgrade -paul@f0:~ % doas freebsd-update install -paul@f0:~ % doas freebsd-update install -paul@f0:~ % doas reboot -. -. -. -paul@f0:~ % doas freebsd-update install -paul@f0:~ % doas pkg update -paul@f0:~ % doas pkg upgrade -paul@f0:~ % doas reboot -. -. -. 
-paul@f0:~ % uname -a -FreeBSD f0.lan.buetow.org <font color="#000000">14.3</font>-RELEASE FreeBSD <font color="#000000">14.3</font>-RELEASE - releng/<font color="#000000">14.3</font>-n<font color="#000000">271432</font>-8c9ce319fef7 GENERIC amd64 -</pre> -<br /> -<h2 style='display: inline' id='installing-k3s'>Installing k3s</h2><br /> -<br /> -<h3 style='display: inline' id='generating-k3stoken-and-starting-first-k3s-node'>Generating <span class='inlinecode'>K3S_TOKEN</span> and starting first k3s node</h3><br /> -<br /> -<span>I generated the k3s token on my Fedora laptop with <span class='inlinecode'>pwgen -n 32</span> and selected one of the results. Then, on all three <span class='inlinecode'>r</span> hosts (replace SECRET_TOKEN with the actual secret before running the following command), run:</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>[root@r0 ~]<i><font color="silver"># echo -n SECRET_TOKEN > ~/.k3s_token</font></i> -</pre> -<br /> -<span>The following steps are also documented on the k3s website:</span><br /> -<br /> -<a class='textlink' href='https://docs.k3s.io/datastore/ha-embedded'>https://docs.k3s.io/datastore/ha-embedded</a><br /> -<br /> -<span>So on <span class='inlinecode'>r0</span> we run:</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>[root@r0 ~]<i><font color="silver"># curl -sfL https://get.k3s.io | K3S_TOKEN=$(cat ~/.k3s_token) \</font></i> - sh -s - server --cluster-init --tls-san=r0.wg0.wan.buetow.org -[INFO] Finding release <b><u><font color="#000000">for</font></u></b> channel stable -[INFO] Using v1.<font color="#000000">32.6</font>+k3s1 as release -. -. -. 
-[INFO] systemd: Starting k3s -</pre> -<br /> -<h3 style='display: inline' id='adding-the-remaining-nodes-to-the-cluster'>Adding the remaining nodes to the cluster</h3><br /> -<br /> -<span>And we run on the other two nodes <span class='inlinecode'>r1</span> and <span class='inlinecode'>r2</span>:</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>[root@r1 ~]<i><font color="silver"># curl -sfL https://get.k3s.io | K3S_TOKEN=$(cat ~/.k3s_token) \</font></i> - sh -s - server --server https://r<font color="#000000">0</font>.wg0.wan.buetow.org:<font color="#000000">6443</font> \ - --tls-san=r1.wg0.wan.buetow.org - -[root@r2 ~]<i><font color="silver"># curl -sfL https://get.k3s.io | K3S_TOKEN=$(cat ~/.k3s_token) \</font></i> - sh -s - server --server https://r<font color="#000000">0</font>.wg0.wan.buetow.org:<font color="#000000">6443</font> \ - --tls-san=r2.wg0.wan.buetow.org -. -. -. 
- -</pre> -<br /> -<span>Once done, we've got a 3 node Kubernetes cluster control plane:</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>[root@r0 ~]<i><font color="silver"># kubectl get nodes</font></i> -NAME STATUS ROLES AGE VERSION -r0.lan.buetow.org Ready control-plane,etcd,master 4m44s v1.<font color="#000000">32.6</font>+k3s1 -r1.lan.buetow.org Ready control-plane,etcd,master 3m13s v1.<font color="#000000">32.6</font>+k3s1 -r2.lan.buetow.org Ready control-plane,etcd,master 30s v1.<font color="#000000">32.6</font>+k3s1 - -[root@r0 ~]<i><font color="silver"># kubectl get pods --all-namespaces</font></i> -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system coredns-5688667fd4-fs2jj <font color="#000000">1</font>/<font color="#000000">1</font> Running <font color="#000000">0</font> 5m27s -kube-system helm-install-traefik-crd-f9hgd <font color="#000000">0</font>/<font color="#000000">1</font> Completed <font color="#000000">0</font> 5m27s -kube-system helm-install-traefik-zqqqk <font color="#000000">0</font>/<font color="#000000">1</font> Completed <font color="#000000">2</font> 5m27s -kube-system local-path-provisioner-774c6665dc-jqlnc <font color="#000000">1</font>/<font color="#000000">1</font> Running <font color="#000000">0</font> 5m27s -kube-system metrics-server-6f4c6675d5-5xpmp <font color="#000000">1</font>/<font color="#000000">1</font> Running <font color="#000000">0</font> 5m27s -kube-system svclb-traefik-411cec5b-cdp2l <font color="#000000">2</font>/<font color="#000000">2</font> Running <font color="#000000">0</font> 78s -kube-system svclb-traefik-411cec5b-f625r <font color="#000000">2</font>/<font color="#000000">2</font> Running <font color="#000000">0</font> 4m58s -kube-system svclb-traefik-411cec5b-twrd<font color="#000000">7</font> <font color="#000000">2</font>/<font color="#000000">2</font> Running <font 
color="#000000">0</font>             4m2s -kube-system   traefik-c98fdf6fb-lt6fx                   <font color="#000000">1</font>/<font color="#000000">1</font>     Running     <font color="#000000">0</font>             4m58s -</pre> -<br /> -<span>In order to connect with <span class='inlinecode'>kubectl</span> from my Fedora laptop, I had to copy <span class='inlinecode'>/etc/rancher/k3s/k3s.yaml</span> from <span class='inlinecode'>r0</span> to <span class='inlinecode'>~/.kube/config</span> and then replace the value of the server field with <span class='inlinecode'>r0.lan.buetow.org</span>. kubectl can now manage the cluster. Note that this step has to be repeated when we want to connect to another node of the cluster (e.g. when <span class='inlinecode'>r0</span> is down).</span><br /> -<br /> -<h2 style='display: inline' id='test-deployments'>Test deployments</h2><br /> -<br /> -<h3 style='display: inline' id='test-deployment-to-kubernetes'>Test deployment to Kubernetes</h3><br /> -<br /> -<span>Let's create a test namespace:</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>> ~ kubectl create namespace <b><u><font color="#000000">test</font></u></b> -namespace/test created - -> ~ kubectl get namespaces -NAME              STATUS   AGE -default           Active   6h11m -kube-node-lease   Active   6h11m -kube-public       Active   6h11m -kube-system       Active   6h11m -<b><u><font color="#000000">test</font></u></b>              Active   5s - -> ~ kubectl config set-context --current --namespace=<b><u><font color="#000000">test</font></u></b> -Context <font color="#808080">"default"</font> modified. 
-</pre> -<br /> -<span>And let's also create an apache test pod:</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>> ~ cat <<END > apache-deployment.yaml -<i><font color="silver"># Apache HTTP Server Deployment</font></i> -apiVersion: apps/v<font color="#000000">1</font> -kind: Deployment -metadata: - name: apache-deployment -spec: - replicas: <font color="#000000">1</font> - selector: - matchLabels: - app: apache - template: - metadata: - labels: - app: apache - spec: - containers: - - name: apache - image: httpd:latest - ports: - <i><font color="silver"># Container port where Apache listens</font></i> - - containerPort: <font color="#000000">80</font> -END - -> ~ kubectl apply -f apache-deployment.yaml -deployment.apps/apache-deployment created - -> ~ kubectl get all -NAME READY STATUS RESTARTS AGE -pod/apache-deployment-5fd955856f-4pjmf <font color="#000000">1</font>/<font color="#000000">1</font> Running <font color="#000000">0</font> 7s - -NAME READY UP-TO-DATE AVAILABLE AGE -deployment.apps/apache-deployment <font color="#000000">1</font>/<font color="#000000">1</font> <font color="#000000">1</font> <font color="#000000">1</font> 7s - -NAME DESIRED CURRENT READY AGE -replicaset.apps/apache-deployment-5fd955856f <font color="#000000">1</font> <font color="#000000">1</font> <font color="#000000">1</font> 7s -</pre> -<br /> -<span>Let's also create a service: </span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>> ~ cat <<END > apache-service.yaml -apiVersion: v1 -kind: Service -metadata: - labels: - app: apache - name: apache-service -spec: - ports: - - name: web - port: <font color="#000000">80</font> - protocol: TCP - <i><font color="silver"># Expose port 80 on the service</font></i> - targetPort: <font color="#000000">80</font> - 
selector: - <i><font color="silver"># Link this service to pods with the label app=apache</font></i> - app: apache -END - -> ~ kubectl apply -f apache-service.yaml -service/apache-service created - -> ~ kubectl get service -NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE -apache-service   ClusterIP   <font color="#000000">10.43</font>.<font color="#000000">249.165</font>   <none>        <font color="#000000">80</font>/TCP    4s -</pre> -<br /> -<span>And also an ingress:</span><br /> -<br /> -<span class='quote'>Note: I modified the hosts listed in this example after I published this blog post. This is to ensure that there aren't any bots scraping it.</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>> ~ cat <<END > apache-ingress.yaml - -apiVersion: networking.k8s.io/v<font color="#000000">1</font> -kind: Ingress -metadata: - name: apache-ingress - namespace: <b><u><font color="#000000">test</font></u></b> - annotations: - spec.ingressClassName: traefik - traefik.ingress.kubernetes.io/router.entrypoints: web -spec: - rules: - - host: f3s.foo.zone - http: - paths: - - path: / - pathType: Prefix - backend: - service: - name: apache-service - port: - number: <font color="#000000">80</font> - - host: standby.f3s.foo.zone - http: - paths: - - path: / - pathType: Prefix - backend: - service: - name: apache-service - port: - number: <font color="#000000">80</font> - - host: www.f3s.foo.zone - http: - paths: - - path: / - pathType: Prefix - backend: - service: - name: apache-service - port: - number: <font color="#000000">80</font> -END - -> ~ kubectl apply -f apache-ingress.yaml -ingress.networking.k8s.io/apache-ingress created - -> ~ kubectl describe ingress -Name:             apache-ingress -Labels:           <none> -Namespace:        <b><u><font color="#000000">test</font></u></b> -Address:          <font color="#000000">192.168</font>.<font color="#000000">1.120</font>,<font 
color="#000000">192.168</font>.<font color="#000000">1.121</font>,<font color="#000000">192.168</font>.<font color="#000000">1.122</font> -Ingress Class: traefik -Default backend: <default> -Rules: - Host Path Backends - ---- ---- -------- - f3s.foo.zone - / apache-service:<font color="#000000">80</font> (<font color="#000000">10.42</font>.<font color="#000000">1.11</font>:<font color="#000000">80</font>) - standby.f3s.foo.zone - / apache-service:<font color="#000000">80</font> (<font color="#000000">10.42</font>.<font color="#000000">1.11</font>:<font color="#000000">80</font>) - www.f3s.foo.zone - / apache-service:<font color="#000000">80</font> (<font color="#000000">10.42</font>.<font color="#000000">1.11</font>:<font color="#000000">80</font>) -Annotations: spec.ingressClassName: traefik - traefik.ingress.kubernetes.io/router.entrypoints: web -Events: <none> -</pre> -<br /> -<span>Notes: </span><br /> -<br /> -<ul> -<li>I've modified the ingress hosts after I'd published this blog post. 
This is to ensure that there aren't any bots scraping it.</li> -<li>In the ingress we use plain http (web) for the traefik rule, as all the "production" traffic will be routed through a WireGuard tunnel anyway, as we will see later.</li> -</ul><br /> -<span>So let's test the Apache webserver through the ingress rule:</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>> ~ curl -H <font color="#808080">"Host: www.f3s.foo.zone"</font> http://r<font color="#000000">0</font>.lan.buetow.org:<font color="#000000">80</font> -<html><body><h1>It works!</h<font color="#000000">1</font>></body></html> -</pre> -<br /> -<h3 style='display: inline' id='test-deployment-with-persistent-volume-claim'>Test deployment with persistent volume claim</h3><br /> -<br /> -<span>So let's modify the Apache example to serve the <span class='inlinecode'>htdocs</span> directory from the NFS share we created in the previous blog post. We are using the following manifests. 
The majority of the manifests are the same as before, except for the persistent volume claim and the volume mount in the Apache deployment.</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>> ~ cat <<END > apache-deployment.yaml -<i><font color="silver"># Apache HTTP Server Deployment</font></i> -apiVersion: apps/v<font color="#000000">1</font> -kind: Deployment -metadata: - name: apache-deployment - namespace: <b><u><font color="#000000">test</font></u></b> -spec: - replicas: <font color="#000000">2</font> - selector: - matchLabels: - app: apache - template: - metadata: - labels: - app: apache - spec: - containers: - - name: apache - image: httpd:latest - ports: - <i><font color="silver"># Container port where Apache listens</font></i> - - containerPort: <font color="#000000">80</font> - readinessProbe: - httpGet: - path: / - port: <font color="#000000">80</font> - initialDelaySeconds: <font color="#000000">5</font> - periodSeconds: <font color="#000000">10</font> - livenessProbe: - httpGet: - path: / - port: <font color="#000000">80</font> - initialDelaySeconds: <font color="#000000">15</font> - periodSeconds: <font color="#000000">10</font> - volumeMounts: - - name: apache-htdocs - mountPath: /usr/local/apache<font color="#000000">2</font>/htdocs/ - volumes: - - name: apache-htdocs - persistentVolumeClaim: - claimName: example-apache-pvc -END - -> ~ cat <<END > apache-ingress.yaml -apiVersion: networking.k8s.io/v<font color="#000000">1</font> -kind: Ingress -metadata: - name: apache-ingress - namespace: <b><u><font color="#000000">test</font></u></b> - annotations: - spec.ingressClassName: traefik - traefik.ingress.kubernetes.io/router.entrypoints: web -spec: - rules: - - host: f3s.buetow.org - http: - paths: - - path: / - pathType: Prefix - backend: - service: - name: apache-service - port: - number: <font color="#000000">80</font> - - host: 
standby.f3s.buetow.org - http: - paths: - - path: / - pathType: Prefix - backend: - service: - name: apache-service - port: - number: <font color="#000000">80</font> - - host: www.f3s.buetow.org - http: - paths: - - path: / - pathType: Prefix - backend: - service: - name: apache-service - port: - number: <font color="#000000">80</font> -END - -> ~ cat <<END > apache-persistent-volume.yaml -apiVersion: v1 -kind: PersistentVolume -metadata: - name: example-apache-pv -spec: - capacity: - storage: 1Gi - volumeMode: Filesystem - accessModes: - - ReadWriteOnce - persistentVolumeReclaimPolicy: Retain - hostPath: - path: /data/nfs/k3svolumes/example-apache-volume-claim - <b><u><font color="#000000">type</font></u></b>: Directory ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: example-apache-pvc - namespace: <b><u><font color="#000000">test</font></u></b> -spec: - storageClassName: <font color="#808080">""</font> - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 1Gi -END - -> ~ cat <<END > apache-service.yaml -apiVersion: v1 -kind: Service -metadata: - labels: - app: apache - name: apache-service - namespace: <b><u><font color="#000000">test</font></u></b> -spec: - ports: - - name: web - port: <font color="#000000">80</font> - protocol: TCP - <i><font color="silver"># Expose port 80 on the service</font></i> - targetPort: <font color="#000000">80</font> - selector: - <i><font color="silver"># Link this service to pods with the label app=apache</font></i> - app: apache -END -</pre> -<br /> -<span>And let's apply the manifests:</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>> ~ kubectl apply -f apache-persistent-volume.yaml - kubectl apply -f apache-service.yaml - kubectl apply -f apache-deployment.yaml - kubectl apply -f apache-ingress.yaml -</pre> -<br /> -<span>So looking at the deployment, it failed now, as the 
directory doesn't exist yet on the NFS share (note that we also increased the replica count to 2 so that, if one node goes down, there is already a replica running on another node for faster failover):</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>> ~ kubectl get pods -NAME                                 READY   STATUS              RESTARTS   AGE -apache-deployment-5b96bd6b6b-fv2jx   <font color="#000000">0</font>/<font color="#000000">1</font>     ContainerCreating   <font color="#000000">0</font>          9m15s -apache-deployment-5b96bd6b6b-ax2ji   <font color="#000000">0</font>/<font color="#000000">1</font>     ContainerCreating   <font color="#000000">0</font>          9m15s - -> ~ kubectl describe pod apache-deployment-5b96bd6b6b-fv2jx | tail -n <font color="#000000">5</font> -Events: -  Type     Reason       Age                   From               Message -  ----     ------       ----                  ----               ------- -  Normal   Scheduled    9m34s                 default-scheduler  Successfully -  assigned test/apache-deployment-5b96bd6b6b-fv2jx to r2.lan.buetow.org -  Warning  FailedMount  80s (x12 over 9m34s)  kubelet            MountVolume.SetUp -  failed <b><u><font color="#000000">for</font></u></b> volume <font color="#808080">"example-apache-pv"</font> : hostPath <b><u><font color="#000000">type</font></u></b> check failed: -  /data/nfs/k3svolumes/example-apache is not a directory -</pre> -<br /> -<span>This is on purpose! We need to create the directory on the NFS share first, so let's do that (e.g. 
on <span class='inlinecode'>r0</span>):</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>[root@r0 ~]<i><font color="silver"># mkdir /data/nfs/k3svolumes/example-apache-volume-claim/</font></i> - -[root@r0 ~ ] cat <<END > /data/nfs/k3svolumes/example-apache-volume-claim/index.html -<!DOCTYPE html> -<html> -<head> - <title>Hello, it works</title> -</head> -<body> - <h1>Hello, it works!</h<font color="#000000">1</font>> - <p>This site is served via a PVC!</p> -</body> -</html> -END -</pre> -<br /> -<span>The <span class='inlinecode'>index.html</span> file was also created to serve content along the way. After deleting the pod, it recreates itself, and the volume mounts correctly:</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>> ~ kubectl delete pod apache-deployment-5b96bd6b6b-fv2jx - -> ~ curl -H <font color="#808080">"Host: www.f3s.buetow.org"</font> http://r<font color="#000000">0</font>.lan.buetow.org:<font color="#000000">80</font> -<!DOCTYPE html> -<html> -<head> - <title>Hello, it works</title> -</head> -<body> - <h1>Hello, it works!</h<font color="#000000">1</font>> - <p>This site is served via a PVC!</p> -</body> -</html> -</pre> -<br /> -<h2 style='display: inline' id='make-it-accessible-from-the-public-internet'>Make it accessible from the public internet</h2><br /> -<br /> -<span>Next, this should be made accessible through the public internet via the <span class='inlinecode'>www.f3s.foo.zone</span> hosts. 
As a reminder, refer back to part 1 of this series and review the section titled "OpenBSD/relayd to the rescue for external connectivity":</span><br /> -<br /> -<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> -<br /> -<span class='quote'>All apps should be reachable through the internet (e.g., from my phone or computer when travelling). For external connectivity and TLS management, I've got two OpenBSD VMs (one hosted by OpenBSD Amsterdam and another hosted by Hetzner) handling public-facing services like DNS, relaying traffic, and automating Let's Encrypt certificates.</span><br /> -<br /> -<span class='quote'>All of this (every Linux VM to every OpenBSD box) will be connected via WireGuard tunnels, keeping everything private and secure. There will be 6 WireGuard tunnels (3 k3s nodes times two OpenBSD VMs).</span><br /> -<br /> -<span class='quote'>So, when I want to access a service running in k3s, I will hit an external DNS endpoint (with the authoritative DNS servers being the OpenBSD boxes). 
The DNS will resolve to the master OpenBSD VM (see my KISS highly-available with OpenBSD blog post), and from there, the relayd process (with a Let's Encrypt certificate—see my Let's Encrypt with OpenBSD and Rex blog post) will accept the TCP connection and forward it through the WireGuard tunnel to a reachable node port of one of the k3s nodes, thus serving the traffic.</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>> ~ curl https://f3s.foo.zone -<html><body><h1>It works!</h<font color="#000000">1</font>></body></html> - -> ~ curl https://www.f3s.foo.zone -<html><body><h1>It works!</h<font color="#000000">1</font>></body></html> - -> ~ curl https://standby.f3s.foo.zone -<html><body><h1>It works!</h<font color="#000000">1</font>></body></html> -</pre> -<br /> -<h2 style='display: inline' id='failure-test'>Failure test</h2><br /> -<br /> -<span>Shutting down <span class='inlinecode'>f0</span> and letting NFS fail over for the Apache content.</span><br /> -<br /> -<br /> -<span>TODO: openbsd relayd config</span><br /> -<span>TODO: registry howto</span><br /> -<span>TODO: anki-droid deployment</span><br /> -<span>TODO: include k9s screenshot</span><br /> -<span>TODO: include a diagram again?</span><br /> -<span>TODO: increase replica of traefik to 2, persist config surviving reboots</span><br /> -<span>TODO: fix check-mounts script (mountpoint command and stale mounts... differentiate better)</span><br /> -<span>TODO: remove traefik metal lb pods? persist the change?</span><br /> -<span>TODO: use Helm chart examples, but only after the initial apache example...</span><br /> -<span>TODO: how to set up the users for the NFSv4 user mapping (same user with same UIDs in container, on Rocky and on FreeBSD). Also ensure that the <span class='inlinecode'>id</span> command shows the same everywhere, as there may already be entries/duplicates in the passwd files (e.g. tape group, etc.)</span><br /> -<br /> -<span>Other *BSD-related posts:</span><br /> -<br /> -<a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> -<a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> -<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> -<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br /> -<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br /> -<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> -<a class='textlink' href='./2024-04-01-KISS-high-availability-with-OpenBSD.html'>2024-04-01 KISS high-availability with OpenBSD</a><br /> -<a class='textlink' href='./2024-01-13-one-reason-why-i-love-openbsd.html'>2024-01-13 One reason why I love OpenBSD</a><br /> -<a class='textlink' href='./2022-10-30-installing-dtail-on-openbsd.html'>2022-10-30 Installing DTail on OpenBSD</a><br /> -<a class='textlink' href='./2022-07-30-lets-encrypt-with-openbsd-and-rex.html'>2022-07-30 Let's Encrypt with OpenBSD and Rex</a><br /> -<a class='textlink' href='./2016-04-09-jails-and-zfs-on-freebsd-with-puppet.html'>2016-04-09 Jails and ZFS with Puppet on FreeBSD</a><br /> -<br /> -<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span></span><br /> -<br /> -<a class='textlink' href='../'>Back to the main site</a><br /> -<br /> -<br /> 
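One of the open TODO items above is the OpenBSD relayd configuration. As a rough sketch of the setup quoted from part 1 (terminating TLS on the OpenBSD relay and forwarding through WireGuard to the k3s nodes' Traefik web entrypoint), a relayd.conf could look like the following. The listen address is a TEST-NET placeholder and the keypair name, table entries, and check type are assumptions, not the actual configuration:

```
# /etc/relayd.conf -- hypothetical sketch, not the real f3s config.
ext_addr="203.0.113.1"   # placeholder public IP of the OpenBSD relay

# The three k3s nodes, reachable via their WireGuard tunnel addresses.
table <k3s_nodes> { r0.wg0.wan.buetow.org r1.wg0.wan.buetow.org r2.wg0.wan.buetow.org }

http protocol "https_to_k3s" {
	# Let's Encrypt keypair managed on the relay.
	tls keypair "f3s.foo.zone"
	# Preserve the client address for the Traefik ingress behind the tunnel.
	match request header set "X-Forwarded-For" value "$REMOTE_ADDR"
}

relay "k3s_https" {
	# Accept public HTTPS, decrypt, and forward plain HTTP into the tunnel.
	listen on $ext_addr port 443 tls
	protocol "https_to_k3s"
	# Port 80 is the plain-HTTP (web) Traefik entrypoint used by the ingress.
	forward to <k3s_nodes> port 80 check tcp
}
```

With this shape, any of the three nodes can serve the traffic, and relayd's host checks skip a node that is down.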
-<span>Note that I modified the hosts after publishing this blog post. This is to ensure that there aren't any bots scraping it.</span><br /> -<p class="footer"> -  Generated with <a href="https://codeberg.org/snonux/gemtexter">Gemtexter 3.0.1-develop</a> | -  served by <a href="https://www.OpenBSD.org">OpenBSD</a>/<a href="https://man.openbsd.org/relayd.8">relayd(8)</a>+<a href="https://man.openbsd.org/httpd.8">httpd(8)</a> | -  <a href="https://foo.zone/site-mirrors.html">Site Mirrors</a> -  <br /> -  Webring: <a href="https://shring.sh/foo.zone/previous">previous</a> | <a href="https://shring.sh">shring</a> | <a href="https://shring.sh/foo.zone/next">next</a> -</p> -</body> -</html> diff --git a/gemfeed/atom.xml b/gemfeed/atom.xml index f90a0498..36f04d8e 100644 --- a/gemfeed/atom.xml +++ b/gemfeed/atom.xml @@ -1,12 +1,1091 @@ <?xml version="1.0" encoding="utf-8"?> <feed xmlns="http://www.w3.org/2005/Atom"> -	<updated>2025-09-29T09:38:00+03:00</updated> +	<updated>2025-10-02T11:27:20+03:00</updated> 	<title>foo.zone feed</title> 	<subtitle>To be in the .zone!</subtitle> 	<link href="https://foo.zone/gemfeed/atom.xml" rel="self" /> 	<link href="https://foo.zone/" /> 	<id>https://foo.zone/</id> 	<entry> + 	<title>f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</title> + 	<link href="https://foo.zone/gemfeed/2025-10-02-f3s-kubernetes-with-freebsd-part-7.html" /> + 	<id>https://foo.zone/gemfeed/2025-10-02-f3s-kubernetes-with-freebsd-part-7.html</id> + 	<updated>2025-10-02T11:27:19+03:00</updated> + 	<author> + 		<name>Paul Buetow aka snonux</name> + 		<email>paul@dev.buetow.org</email> + 	</author> + 	<summary>This is the seventh blog post about the f3s series for my self-hosting demands in a home lab. f3s? 
The 'f' stands for FreeBSD, and the '3s' stands for k3s, the Kubernetes distribution I use on FreeBSD-based physical machines.</summary> + <content type="xhtml"> + <div xmlns="http://www.w3.org/1999/xhtml"> + <h1 style='display: inline' id='f3s-kubernetes-with-freebsd---part-7-k3s-and-first-pod-deployments'>f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</h1><br /> +<br /> +<span>This is the seventh blog post about the f3s series for my self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution I use on FreeBSD-based physical machines.</span><br /> +<br /> +<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> +<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br /> +<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br /> +<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> +<a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> +<a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments (You are currently reading this)</a><br /> +<br /> +<a href='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png'><img alt='f3s logo' title='f3s logo' src='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png' /></a><br 
/> +<br /> +<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br /> +<br /> +<ul> +<li><a href='#f3s-kubernetes-with-freebsd---part-7-k3s-and-first-pod-deployments'>f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a></li> +<li>⇢ <a href='#introduction'>Introduction</a></li> +<li>⇢ <a href='#updating'>Updating</a></li> +<li>⇢ <a href='#installing-k3s'>Installing k3s</a></li> +<li>⇢ ⇢ <a href='#generating-k3stoken-and-starting-the-first-k3s-node'>Generating <span class='inlinecode'>K3S_TOKEN</span> and starting the first k3s node</a></li> +<li>⇢ ⇢ <a href='#adding-the-remaining-nodes-to-the-cluster'>Adding the remaining nodes to the cluster</a></li> +<li>⇢ <a href='#test-deployments'>Test deployments</a></li> +<li>⇢ ⇢ <a href='#test-deployment-to-kubernetes'>Test deployment to Kubernetes</a></li> +<li>⇢ ⇢ <a href='#test-deployment-with-persistent-volume-claim'>Test deployment with persistent volume claim</a></li> +<li>⇢ ⇢ <a href='#scaling-traefik-for-faster-failover'>Scaling Traefik for faster failover</a></li> +<li>⇢ <a href='#make-it-accessible-from-the-public-internet'>Make it accessible from the public internet</a></li> +<li>⇢ ⇢ <a href='#openbsd-relayd-configuration'>OpenBSD relayd configuration</a></li> +<li>⇢ <a href='#deploying-the-private-docker-image-registry'>Deploying the private Docker image registry</a></li> +<li>⇢ ⇢ <a href='#prepare-the-nfs-backed-storage'>Prepare the NFS-backed storage</a></li> +<li>⇢ ⇢ <a href='#install-or-upgrade-the-chart'>Install (or upgrade) the chart</a></li> +<li>⇢ ⇢ <a href='#allow-nodes-and-workstations-to-trust-the-registry'>Allow nodes and workstations to trust the registry</a></li> +<li>⇢ ⇢ <a href='#pushing-and-pulling-images'>Pushing and pulling images</a></li> +<li>⇢ <a href='#example-anki-sync-server-from-the-private-registry'>Example: Anki Sync Server from the private registry</a></li> +<li>⇢ ⇢ <a href='#build-and-push-the-image'>Build and push the image</a></li> +<li>⇢ ⇢ 
<a href='#create-the-anki-secret-and-storage-on-the-cluster'>Create the Anki secret and storage on the cluster</a></li> +<li>⇢ ⇢ <a href='#deploy-the-chart'>Deploy the chart</a></li> +<li>⇢ <a href='#nfsv4-uid-mapping-for-postgres-backed-and-other-apps'>NFSv4 UID mapping for Postgres-backed (and other) apps</a></li> +<li>⇢ ⇢ <a href='#helm-charts-currently-in-service'>Helm charts currently in service</a></li> +</ul><br /> +<h2 style='display: inline' id='introduction'>Introduction</h2><br /> +<br /> +<span>In this blog post, I am finally going to install k3s (the Kubernetes distribution I use) to the whole setup and deploy the first workloads (helm charts, and a private registry) to it.</span><br /> +<br /> +<a class='textlink' href='https://k3s.io'>https://k3s.io</a><br /> +<br /> +<h2 style='display: inline' id='updating'>Updating</h2><br /> +<br /> +<span>Before proceeding, I bring all systems involved up-to-date. On all three Rocky Linux 9 boxes <span class='inlinecode'>r0</span>, <span class='inlinecode'>r1</span>, and <span class='inlinecode'>r2</span>:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>dnf update -y +reboot +</pre> +<br /> +<span>On the FreeBSD hosts, I upgraded from FreeBSD 14.2 to 14.3-RELEASE, running this on all three hosts <span class='inlinecode'>f0</span>, <span class='inlinecode'>f1</span> and <span class='inlinecode'>f2</span>:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:~ % doas freebsd-update fetch +paul@f0:~ % doas freebsd-update install +paul@f0:~ % doas reboot +. +. +. +paul@f0:~ % doas freebsd-update -r <font color="#000000">14.3</font>-RELEASE upgrade +paul@f0:~ % doas freebsd-update install +paul@f0:~ % doas freebsd-update install +paul@f0:~ % doas reboot +. +. +. 
+paul@f0:~ % doas freebsd-update install +paul@f0:~ % doas pkg update +paul@f0:~ % doas pkg upgrade +paul@f0:~ % doas reboot +. +. +. +paul@f0:~ % uname -a +FreeBSD f0.lan.buetow.org <font color="#000000">14.3</font>-RELEASE FreeBSD <font color="#000000">14.3</font>-RELEASE + releng/<font color="#000000">14.3</font>-n<font color="#000000">271432</font>-8c9ce319fef7 GENERIC amd64 +</pre> +<br /> +<h2 style='display: inline' id='installing-k3s'>Installing k3s</h2><br /> +<br /> +<h3 style='display: inline' id='generating-k3stoken-and-starting-the-first-k3s-node'>Generating <span class='inlinecode'>K3S_TOKEN</span> and starting the first k3s node</h3><br /> +<br /> +<span>I generated the k3s token on my Fedora laptop with <span class='inlinecode'>pwgen -n 32</span> and selected one of the results. Then, on all three <span class='inlinecode'>r</span> hosts, I ran the following (replace SECRET_TOKEN with the actual secret):</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>[root@r0 ~]<i><font color="silver"># echo -n SECRET_TOKEN > ~/.k3s_token</font></i> +</pre> +<br /> +<span>The following steps are also documented on the k3s website:</span><br /> +<br /> +<a class='textlink' href='https://docs.k3s.io/datastore/ha-embedded'>https://docs.k3s.io/datastore/ha-embedded</a><br /> +<br /> +<span>To bootstrap k3s on the first node, I ran this on <span class='inlinecode'>r0</span>:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>[root@r0 ~]<i><font color="silver"># curl -sfL https://get.k3s.io | K3S_TOKEN=$(cat ~/.k3s_token) \</font></i> + sh -s - server --cluster-init --tls-san=r0.wg0.wan.buetow.org +[INFO] Finding release <b><u><font color="#000000">for</font></u></b> channel stable +[INFO] Using v1.<font color="#000000">32.6</font>+k3s1 as 
release +. +. +. +[INFO] systemd: Starting k3s +</pre> +<br /> +<h3 style='display: inline' id='adding-the-remaining-nodes-to-the-cluster'>Adding the remaining nodes to the cluster</h3><br /> +<br /> +<span>Then I ran on the other two nodes <span class='inlinecode'>r1</span> and <span class='inlinecode'>r2</span>:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>[root@r1 ~]<i><font color="silver"># curl -sfL https://get.k3s.io | K3S_TOKEN=$(cat ~/.k3s_token) \</font></i> + sh -s - server --server https://r<font color="#000000">0</font>.wg0.wan.buetow.org:<font color="#000000">6443</font> \ + --tls-san=r1.wg0.wan.buetow.org + +[root@r2 ~]<i><font color="silver"># curl -sfL https://get.k3s.io | K3S_TOKEN=$(cat ~/.k3s_token) \</font></i> + sh -s - server --server https://r<font color="#000000">0</font>.wg0.wan.buetow.org:<font color="#000000">6443</font> \ + --tls-san=r2.wg0.wan.buetow.org +. +. +. 
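+# Hypothetical sanity check (not from the original session): every node must
+# join with the same K3S_TOKEN, so compare the token hashes across the nodes:
+[root@r1 ~]# sha256sum ~/.k3s_token   # should match the hash on r0 and r2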
+ +</pre> +<br /> +<span>Once done, I had a three-node Kubernetes cluster control plane:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>[root@r0 ~]<i><font color="silver"># kubectl get nodes</font></i> +NAME STATUS ROLES AGE VERSION +r0.lan.buetow.org Ready control-plane,etcd,master 4m44s v1.<font color="#000000">32.6</font>+k3s1 +r1.lan.buetow.org Ready control-plane,etcd,master 3m13s v1.<font color="#000000">32.6</font>+k3s1 +r2.lan.buetow.org Ready control-plane,etcd,master 30s v1.<font color="#000000">32.6</font>+k3s1 + +[root@r0 ~]<i><font color="silver"># kubectl get pods --all-namespaces</font></i> +NAMESPACE NAME READY STATUS RESTARTS AGE +kube-system coredns-5688667fd4-fs2jj <font color="#000000">1</font>/<font color="#000000">1</font> Running <font color="#000000">0</font> 5m27s +kube-system helm-install-traefik-crd-f9hgd <font color="#000000">0</font>/<font color="#000000">1</font> Completed <font color="#000000">0</font> 5m27s +kube-system helm-install-traefik-zqqqk <font color="#000000">0</font>/<font color="#000000">1</font> Completed <font color="#000000">2</font> 5m27s +kube-system local-path-provisioner-774c6665dc-jqlnc <font color="#000000">1</font>/<font color="#000000">1</font> Running <font color="#000000">0</font> 5m27s +kube-system metrics-server-6f4c6675d5-5xpmp <font color="#000000">1</font>/<font color="#000000">1</font> Running <font color="#000000">0</font> 5m27s +kube-system svclb-traefik-411cec5b-cdp2l <font color="#000000">2</font>/<font color="#000000">2</font> Running <font color="#000000">0</font> 78s +kube-system svclb-traefik-411cec5b-f625r <font color="#000000">2</font>/<font color="#000000">2</font> Running <font color="#000000">0</font> 4m58s +kube-system svclb-traefik-411cec5b-twrd<font color="#000000">7</font> <font color="#000000">2</font>/<font color="#000000">2</font> Running <font 
color="#000000">0</font> 4m2s +kube-system traefik-c98fdf6fb-lt6fx <font color="#000000">1</font>/<font color="#000000">1</font> Running <font color="#000000">0</font> 4m58s +</pre> +<br /> +<span>In order to connect with <span class='inlinecode'>kubectl</span> from my Fedora laptop, I had to copy <span class='inlinecode'>/etc/rancher/k3s/k3s.yaml</span> from <span class='inlinecode'>r0</span> to <span class='inlinecode'>~/.kube/config</span> and then replace the value of the server field with <span class='inlinecode'>r0.lan.buetow.org</span>. kubectl can now manage the cluster. Note that this step has to be repeated when I want to connect to another node of the cluster (e.g. when <span class='inlinecode'>r0</span> is down).</span><br /> +<br /> +<h2 style='display: inline' id='test-deployments'>Test deployments</h2><br /> +<br /> +<h3 style='display: inline' id='test-deployment-to-kubernetes'>Test deployment to Kubernetes</h3><br /> +<br /> +<span>Let's create a test namespace:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>> ~ kubectl create namespace <b><u><font color="#000000">test</font></u></b> +namespace/test created + +> ~ kubectl get namespaces +NAME STATUS AGE +default Active 6h11m +kube-node-lease Active 6h11m +kube-public Active 6h11m +kube-system Active 6h11m +<b><u><font color="#000000">test</font></u></b> Active 5s + +> ~ kubectl config set-context --current --namespace=<b><u><font color="#000000">test</font></u></b> +Context <font color="#808080">"default"</font> modified. 
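+
+# Sidenote (hypothetical commands, not part of the session above): this laptop
+# talks to the cluster because I copied the kubeconfig from r0 and pointed it
+# at that node (k3s writes "server: https://127.0.0.1:6443" into k3s.yaml by
+# default), roughly like so:
+#   scp root@r0.lan.buetow.org:/etc/rancher/k3s/k3s.yaml ~/.kube/config
+#   sed -i 's|127.0.0.1|r0.lan.buetow.org|' ~/.kube/config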
+</pre> +<br /> +<span>And let's also create an Apache test pod:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>> ~ cat <<END > apache-deployment.yaml +<i><font color="silver"># Apache HTTP Server Deployment</font></i> +apiVersion: apps/v<font color="#000000">1</font> +kind: Deployment +metadata: + name: apache-deployment +spec: + replicas: <font color="#000000">1</font> + selector: + matchLabels: + app: apache + template: + metadata: + labels: + app: apache + spec: + containers: + - name: apache + image: httpd:latest + ports: + <i><font color="silver"># Container port where Apache listens</font></i> + - containerPort: <font color="#000000">80</font> +END + +> ~ kubectl apply -f apache-deployment.yaml +deployment.apps/apache-deployment created + +> ~ kubectl get all +NAME READY STATUS RESTARTS AGE +pod/apache-deployment-5fd955856f-4pjmf <font color="#000000">1</font>/<font color="#000000">1</font> Running <font color="#000000">0</font> 7s + +NAME READY UP-TO-DATE AVAILABLE AGE +deployment.apps/apache-deployment <font color="#000000">1</font>/<font color="#000000">1</font> <font color="#000000">1</font> <font color="#000000">1</font> 7s + +NAME DESIRED CURRENT READY AGE +replicaset.apps/apache-deployment-5fd955856f <font color="#000000">1</font> <font color="#000000">1</font> <font color="#000000">1</font> 7s +</pre> +<br /> +<span>Let's also create a service: </span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>> ~ cat <<END > apache-service.yaml +apiVersion: v1 +kind: Service +metadata: + labels: + app: apache + name: apache-service +spec: + ports: + - name: web + port: <font color="#000000">80</font> + protocol: TCP + <i><font color="silver"># Expose port 80 on the service</font></i> + targetPort: <font color="#000000">80</font> + 
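+  # (Comment added for clarity, not in the original manifest: no "type" field
+  # is set, so this Service defaults to ClusterIP and is reachable only from
+  # inside the cluster or through an ingress.)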
selector: + <i><font color="silver"># Link this service to pods with the label app=apache</font></i> + app: apache +END + +> ~ kubectl apply -f apache-service.yaml +service/apache-service created + +> ~ kubectl get service +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +apache-service ClusterIP <font color="#000000">10.43</font>.<font color="#000000">249.165</font> <none> <font color="#000000">80</font>/TCP 4s +</pre> +<br /> +<span>Now let's create an ingress:</span><br /> +<br /> +<span class='quote'>Note: I've modified the hosts listed in this example after I published this blog post to ensure that there aren't any bots scraping it.</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>> ~ cat <<END > apache-ingress.yaml + +apiVersion: networking.k8s.io/v<font color="#000000">1</font> +kind: Ingress +metadata: + name: apache-ingress + namespace: <b><u><font color="#000000">test</font></u></b> + annotations: + spec.ingressClassName: traefik + traefik.ingress.kubernetes.io/router.entrypoints: web +spec: + rules: + - host: f3s.foo.zone + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: apache-service + port: + number: <font color="#000000">80</font> + - host: standby.f3s.foo.zone + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: apache-service + port: + number: <font color="#000000">80</font> + - host: www.f3s.foo.zone + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: apache-service + port: + number: <font color="#000000">80</font> +END + +> ~ kubectl apply -f apache-ingress.yaml +ingress.networking.k8s.io/apache-ingress created + +> ~ kubectl describe ingress +Name: apache-ingress +Labels: <none> +Namespace: <b><u><font color="#000000">test</font></u></b> +Address: <font color="#000000">192.168</font>.<font color="#000000">1.120</font>,<font 
color="#000000">192.168</font>.<font color="#000000">1.121</font>,<font color="#000000">192.168</font>.<font color="#000000">1.122</font> +Ingress Class: traefik +Default backend: <default> +Rules: + Host Path Backends + ---- ---- -------- + f3s.foo.zone + / apache-service:<font color="#000000">80</font> (<font color="#000000">10.42</font>.<font color="#000000">1.11</font>:<font color="#000000">80</font>) + standby.f3s.foo.zone + / apache-service:<font color="#000000">80</font> (<font color="#000000">10.42</font>.<font color="#000000">1.11</font>:<font color="#000000">80</font>) + www.f3s.foo.zone + / apache-service:<font color="#000000">80</font> (<font color="#000000">10.42</font>.<font color="#000000">1.11</font>:<font color="#000000">80</font>) +Annotations: spec.ingressClassName: traefik + traefik.ingress.kubernetes.io/router.entrypoints: web +Events: <none> +</pre> +<br /> +<span>Notes: </span><br /> +<br /> +<ul> +<li>In the ingress, I use plain HTTP (web) for the Traefik rule, as all the "production" traffic will be routed through a WireGuard tunnel anyway, as I will show later.</li> +</ul><br /> +<span>So I tested the Apache web server through the ingress rule:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>> ~ curl -H <font color="#808080">"Host: www.f3s.foo.zone"</font> http://r<font color="#000000">0</font>.lan.buetow.org:<font color="#000000">80</font> +<html><body><h1>It works!</h<font color="#000000">1</font>></body></html> +</pre> +<br /> +<h3 style='display: inline' id='test-deployment-with-persistent-volume-claim'>Test deployment with persistent volume claim</h3><br /> +<br /> +<span>Next, I modified the Apache example to serve the <span class='inlinecode'>htdocs</span> directory from the NFS share I created in the previous blog post. I used the following manifests. 
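+</span><br />
+<br />
+<span>Since only a few parts change, <span class='inlinecode'>kubectl diff</span> is a handy way to preview what would actually change before applying an updated manifest (a sketch, not from my original session):</span><br />
+<br />
+<pre>> ~ kubectl diff -f apache-deployment.yaml
+</pre>
+<br />
+<span>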
Most of them are the same as before, except for the persistent volume claim and the volume mount in the Apache deployment.</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>> ~ cat <<END > apache-deployment.yaml +<i><font color="silver"># Apache HTTP Server Deployment</font></i> +apiVersion: apps/v<font color="#000000">1</font> +kind: Deployment +metadata: + name: apache-deployment + namespace: <b><u><font color="#000000">test</font></u></b> +spec: + replicas: <font color="#000000">2</font> + selector: + matchLabels: + app: apache + template: + metadata: + labels: + app: apache + spec: + containers: + - name: apache + image: httpd:latest + ports: + <i><font color="silver"># Container port where Apache listens</font></i> + - containerPort: <font color="#000000">80</font> + readinessProbe: + httpGet: + path: / + port: <font color="#000000">80</font> + initialDelaySeconds: <font color="#000000">5</font> + periodSeconds: <font color="#000000">10</font> + livenessProbe: + httpGet: + path: / + port: <font color="#000000">80</font> + initialDelaySeconds: <font color="#000000">15</font> + periodSeconds: <font color="#000000">10</font> + volumeMounts: + - name: apache-htdocs + mountPath: /usr/local/apache<font color="#000000">2</font>/htdocs/ + volumes: + - name: apache-htdocs + persistentVolumeClaim: + claimName: example-apache-pvc +END + +> ~ cat <<END > apache-ingress.yaml +apiVersion: networking.k8s.io/v<font color="#000000">1</font> +kind: Ingress +metadata: + name: apache-ingress + namespace: <b><u><font color="#000000">test</font></u></b> + annotations: + spec.ingressClassName: traefik + traefik.ingress.kubernetes.io/router.entrypoints: web +spec: + rules: + - host: f3s.foo.zone + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: apache-service + port: + number: <font color="#000000">80</font> + - host: standby.f3s.foo.zone + 
http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: apache-service + port: + number: <font color="#000000">80</font> + - host: www.f3s.foo.zone + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: apache-service + port: + number: <font color="#000000">80</font> +END + +> ~ cat <<END > apache-persistent-volume.yaml +apiVersion: v1 +kind: PersistentVolume +metadata: + name: example-apache-pv +spec: + capacity: + storage: 1Gi + volumeMode: Filesystem + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Retain + hostPath: + path: /data/nfs/k3svolumes/example-apache-volume-claim + <b><u><font color="#000000">type</font></u></b>: Directory +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: example-apache-pvc + namespace: <b><u><font color="#000000">test</font></u></b> +spec: + storageClassName: <font color="#808080">""</font> + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi +END + +> ~ cat <<END > apache-service.yaml +apiVersion: v1 +kind: Service +metadata: + labels: + app: apache + name: apache-service + namespace: <b><u><font color="#000000">test</font></u></b> +spec: + ports: + - name: web + port: <font color="#000000">80</font> + protocol: TCP + <i><font color="silver"># Expose port 80 on the service</font></i> + targetPort: <font color="#000000">80</font> + selector: + <i><font color="silver"># Link this service to pods with the label app=apache</font></i> + app: apache +END +</pre> +<br /> +<span>I applied the manifests:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>> ~ kubectl apply -f apache-persistent-volume.yaml +> ~ kubectl apply -f apache-service.yaml +> ~ kubectl apply -f apache-deployment.yaml +> ~ kubectl apply -f apache-ingress.yaml +</pre> +<br /> +<span>Looking at the deployment, I could see it failed because the directory didn't 
exist yet on the NFS share (note that I also increased the replica count to 2 so if one node goes down there's already a replica running on another node for faster failover):</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>> ~ kubectl get pods +NAME READY STATUS RESTARTS AGE +apache-deployment-5b96bd6b6b-fv2jx <font color="#000000">0</font>/<font color="#000000">1</font> ContainerCreating <font color="#000000">0</font> 9m15s +apache-deployment-5b96bd6b6b-ax2ji <font color="#000000">0</font>/<font color="#000000">1</font> ContainerCreating <font color="#000000">0</font> 9m15s + +> ~ kubectl describe pod apache-deployment-5b96bd6b6b-fv2jx | tail -n <font color="#000000">5</font> +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Scheduled 9m34s default-scheduler Successfully + assigned test/apache-deployment-5b96bd6b6b-fv2jx to r2.lan.buetow.org + Warning FailedMount 80s (x12 over 9m34s) kubelet MountVolume.SetUp + failed <b><u><font color="#000000">for</font></u></b> volume <font color="#808080">"example-apache-pv"</font> : hostPath <b><u><font color="#000000">type</font></u></b> check failed: + /data/nfs/k3svolumes/example-apache is not a directory +</pre> +<br /> +<span>That's intentional—I needed to create the directory on the NFS share first, so I did that (e.g. 
on <span class='inlinecode'>r0</span>):</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>[root@r0 ~]<i><font color="silver"># mkdir /data/nfs/k3svolumes/example-apache-volume-claim/</font></i> + +[root@r0 ~]<i><font color="silver"># cat <<END > /data/nfs/k3svolumes/example-apache-volume-claim/index.html</font></i> +<!DOCTYPE html> +<html> +<head> + <title>Hello, it works</title> +</head> +<body> + <h1>Hello, it works!</h<font color="#000000">1</font>> + <p>This site is served via a PVC!</p> +</body> +</html> +END +</pre> +<br /> +<span>The <span class='inlinecode'>index.html</span> file gives us some actual content to serve. After I deleted the stuck pod, the deployment recreated it and this time the volume mounted correctly:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>> ~ kubectl delete pod apache-deployment-5b96bd6b6b-fv2jx + +> ~ curl -H <font color="#808080">"Host: www.f3s.foo.zone"</font> http://r<font color="#000000">0</font>.lan.buetow.org:<font color="#000000">80</font> +<!DOCTYPE html> +<html> +<head> + <title>Hello, it works</title> +</head> +<body> + <h1>Hello, it works!</h<font color="#000000">1</font>> + <p>This site is served via a PVC!</p> +</body> +</html> +</pre> +<br /> +<h3 style='display: inline' id='scaling-traefik-for-faster-failover'>Scaling Traefik for faster failover</h3><br /> +<br /> +<span>Traefik (used for ingress on k3s) ships with a single replica by default, but for faster failover I bumped it to two replicas, so the ingress pods run on two different nodes. That way, if a node disappears, the service stays up while Kubernetes schedules a replacement. 
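+</span><br />
+<br />
+<span>To see whether the two replicas really ended up on different nodes, <span class='inlinecode'>-o wide</span> adds a NODE column to the pod listing (a sketch, not from my original session):</span><br />
+<br />
+<pre>> ~ kubectl -n kube-system get pods -l app.kubernetes.io/name=traefik -o wide
+</pre>
+<br />
+<span>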
Here's the command I used:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>> ~ kubectl -n kube-system scale deployment traefik --replicas=<font color="#000000">2</font> +</pre> +<br /> +<span>And the result:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>> ~ kubectl -n kube-system get pods -l app.kubernetes.io/name=traefik +kube-system traefik-c98fdf6fb-97kqk <font color="#000000">1</font>/<font color="#000000">1</font> Running <font color="#000000">19</font> (53d ago) 64d +kube-system traefik-c98fdf6fb-9npg2 <font color="#000000">1</font>/<font color="#000000">1</font> Running <font color="#000000">11</font> (53d ago) 61d +</pre> +<br /> +<h2 style='display: inline' id='make-it-accessible-from-the-public-internet'>Make it accessible from the public internet</h2><br /> +<br /> +<span>Next, I made this accessible through the public internet via the <span class='inlinecode'>www.f3s.foo.zone</span> hosts. As a reminder from part 1 of this series, I reviewed the section titled "OpenBSD/relayd to the rescue for external connectivity":</span><br /> +<br /> +<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> +<br /> +<span class='quote'>All apps should be reachable through the internet (e.g., from my phone or computer when travelling). For external connectivity and TLS management, I've got two OpenBSD VMs (one hosted by OpenBSD Amsterdam and another hosted by Hetzner) handling public-facing services like DNS, relaying traffic, and automating Let's Encrypt certificates.</span><br /> +<br /> +<span class='quote'>All of this (every Linux VM to every OpenBSD box) will be connected via WireGuard tunnels, keeping everything private and secure. 
There will be 6 WireGuard tunnels (3 k3s nodes times two OpenBSD VMs).</span><br /> +<br /> +<span class='quote'>So, when I want to access a service running in k3s, I will hit an external DNS endpoint (with the authoritative DNS servers being the OpenBSD boxes). The DNS will resolve to the master OpenBSD VM (see my KISS highly-available with OpenBSD blog post), and from there, the relayd process (with a Let's Encrypt certificate—see my Let's Encrypt with OpenBSD and Rex blog post) will accept the TCP connection and forward it through the WireGuard tunnel to a reachable node port of one of the k3s nodes, thus serving the traffic.</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>> ~ curl https://f3s.foo.zone +<html><body><h1>It works!</h<font color="#000000">1</font>></body></html> + +> ~ curl https://www.f3s.foo.zone +<html><body><h1>It works!</h<font color="#000000">1</font>></body></html> + +> ~ curl https://standby.f3s.foo.zone +<html><body><h1>It works!</h<font color="#000000">1</font>></body></html> +</pre> +<br /> +<span>This is how it works in <span class='inlinecode'>relayd.conf</span> on OpenBSD:</span><br /> +<br /> +<h3 style='display: inline' id='openbsd-relayd-configuration'>OpenBSD relayd configuration</h3><br /> +<br /> +<span>The OpenBSD edge relays keep the Kubernetes-facing addresses for the f3s ingress endpoints in a shared backend table so TLS traffic for every <span class='inlinecode'>f3s</span> hostname lands on the same pool of k3s nodes (pointing to the WireGuard IP addresses of those nodes - remember, they are running locally in my LAN, whereas the OpenBSD edge relays operate in the public internet):</span><br /> +<br /> +<pre> +table <f3s> { + 192.168.2.120 + 192.168.2.121 + 192.168.2.122 +} +</pre> +<br /> +<span>Inside the <span class='inlinecode'>http protocol "https"</span> block, each public hostname gets its Let's 
Encrypt certificate and is matched to that backend table. Besides the primary trio, every service-specific hostname (<span class='inlinecode'>anki</span>, <span class='inlinecode'>bag</span>, <span class='inlinecode'>flux</span>, <span class='inlinecode'>audiobookshelf</span>, <span class='inlinecode'>gpodder</span>, <span class='inlinecode'>radicale</span>, <span class='inlinecode'>vault</span>, <span class='inlinecode'>syncthing</span>, <span class='inlinecode'>uprecords</span>) and their <span class='inlinecode'>www</span> / <span class='inlinecode'>standby</span> aliases reuse the same pool, so a new app can go live just by publishing an ingress rule; each hostname ultimately maps to a service running in k3s:</span><br /> +<br /> +<pre> +http protocol "https" { + tls keypair f3s.foo.zone + tls keypair www.f3s.foo.zone + tls keypair standby.f3s.foo.zone + tls keypair anki.f3s.foo.zone + tls keypair www.anki.f3s.foo.zone + tls keypair standby.anki.f3s.foo.zone + tls keypair bag.f3s.foo.zone + tls keypair www.bag.f3s.foo.zone + tls keypair standby.bag.f3s.foo.zone + tls keypair flux.f3s.foo.zone + tls keypair www.flux.f3s.foo.zone + tls keypair standby.flux.f3s.foo.zone + tls keypair audiobookshelf.f3s.foo.zone + tls keypair www.audiobookshelf.f3s.foo.zone + tls keypair standby.audiobookshelf.f3s.foo.zone + tls keypair gpodder.f3s.foo.zone + tls keypair www.gpodder.f3s.foo.zone + tls keypair standby.gpodder.f3s.foo.zone + tls keypair radicale.f3s.foo.zone + tls keypair www.radicale.f3s.foo.zone + tls keypair standby.radicale.f3s.foo.zone + tls keypair vault.f3s.foo.zone + tls keypair www.vault.f3s.foo.zone + tls keypair standby.vault.f3s.foo.zone + tls keypair syncthing.f3s.foo.zone + tls keypair www.syncthing.f3s.foo.zone + tls keypair standby.syncthing.f3s.foo.zone + tls keypair uprecords.f3s.foo.zone + tls keypair www.uprecords.f3s.foo.zone + tls keypair standby.uprecords.f3s.foo.zone + + match request quick header "Host" value "f3s.foo.zone" forward to <f3s> + match 
request quick header "Host" value "www.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "standby.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "anki.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "www.anki.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "standby.anki.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "bag.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "www.bag.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "standby.bag.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "flux.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "www.flux.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "standby.flux.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "audiobookshelf.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "www.audiobookshelf.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "standby.audiobookshelf.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "gpodder.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "www.gpodder.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "standby.gpodder.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "radicale.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "www.radicale.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "standby.radicale.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "vault.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "www.vault.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "standby.vault.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value 
"syncthing.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "www.syncthing.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "standby.syncthing.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "uprecords.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "www.uprecords.f3s.foo.zone" forward to <f3s> + match request quick header "Host" value "standby.uprecords.f3s.foo.zone" forward to <f3s> +} +</pre> +<br /> +<span>Both IPv4 and IPv6 listeners reuse the same protocol definition, making the relay transparent for dual-stack clients while still health-checking every k3s backend before forwarding traffic over WireGuard:</span><br /> +<br /> +<pre> +relay "https4" { + listen on 46.23.94.99 port 443 tls + protocol "https" + forward to <f3s> port 80 check tcp +} + +relay "https6" { + listen on 2a03:6000:6f67:624::99 port 443 tls + protocol "https" + forward to <f3s> port 80 check tcp +} +</pre> +<br /> +<span>In practice, that means relayd terminates TLS with the correct certificate, keeps the three WireGuard-connected backends in rotation, and ships each request to one of the healthy bhyve VMs.</span><br /> +<br /> +<h2 style='display: inline' id='deploying-the-private-docker-image-registry'>Deploying the private Docker image registry</h2><br /> +<br /> +<span>Not all of the Docker images I want to deploy are available on public registries, and I build some of them myself, so I need a private registry. 
</span><br /> +<br /> +<span>All manifests for the f3s stack live in my configuration repository:</span><br /> +<br /> +<a class='textlink' href='https://codeberg.org/snonux/conf/src/branch/master/f3s'>codeberg.org/snonux/conf/f3s</a><br /> +<br /> +<span>Within that repo, the <span class='inlinecode'>examples/conf/f3s/registry/</span> directory contains the Helm chart, a <span class='inlinecode'>Justfile</span>, and a detailed <span class='inlinecode'>README</span>. Here's the condensed walkthrough I used to roll out the registry with Helm.</span><br /> +<br /> +<h3 style='display: inline' id='prepare-the-nfs-backed-storage'>Prepare the NFS-backed storage</h3><br /> +<br /> +<span>Create the directory that will hold the registry blobs on the NFS share (I ran this on <span class='inlinecode'>r0</span>, but any node that mounts <span class='inlinecode'>/data/nfs/k3svolumes</span> works):</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>[root@r0 ~]<i><font color="silver"># mkdir -p /data/nfs/k3svolumes/registry</font></i> +</pre> +<br /> +<h3 style='display: inline' id='install-or-upgrade-the-chart'>Install (or upgrade) the chart</h3><br /> +<br /> +<span>Clone the repo (or pull the latest changes) on a workstation that has <span class='inlinecode'>helm</span> configured for the cluster, then deploy the chart. 
The Justfile wraps the commands, but the raw Helm invocation looks like this:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>$ git clone https://codeberg.org/snonux/conf.git +$ cd conf/f3s/examples/conf/f3s/registry +$ helm upgrade --install registry ./helm-chart --namespace infra --create-namespace +</pre> +<br /> +<span>Helm creates the <span class='inlinecode'>infra</span> namespace if it does not exist, provisions a <span class='inlinecode'>PersistentVolume</span>/<span class='inlinecode'>PersistentVolumeClaim</span> pair that points at <span class='inlinecode'>/data/nfs/k3svolumes/registry</span>, and spins up a single registry pod exposed via the <span class='inlinecode'>docker-registry-service</span> NodePort (<span class='inlinecode'>30001</span>). Verify everything is up before continuing:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>$ kubectl get pods --namespace infra +NAME READY STATUS RESTARTS AGE +docker-registry-6bc9bb46bb-6grkr <font color="#000000">1</font>/<font color="#000000">1</font> Running <font color="#000000">6</font> (53d ago) 54d + +$ kubectl get svc docker-registry-service -n infra +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +docker-registry-service NodePort <font color="#000000">10.43</font>.<font color="#000000">141.56</font> <none> <font color="#000000">5000</font>:<font color="#000000">30001</font>/TCP 54d +</pre> +<br /> +<h3 style='display: inline' id='allow-nodes-and-workstations-to-trust-the-registry'>Allow nodes and workstations to trust the registry</h3><br /> +<br /> +<span>The registry listens on plain HTTP, so both Docker daemons on workstations and the k3s nodes need to treat it as an insecure registry. 
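+</span><br />
+<br />
+<span>A quick way to smoke-test the registry over the NodePort is the Docker Registry HTTP API's <span class='inlinecode'>/v2/_catalog</span> endpoint; an empty repository list simply means the registry answers but nothing has been pushed yet (a sketch, not from my original session):</span><br />
+<br />
+<pre>$ curl http://r0.lan.buetow.org:30001/v2/_catalog
+</pre>
+<br />
+<span>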
That's fine for my personal needs, as:</span><br />
+<br />
+<ul>
+<li>I don't store any secrets in the images</li>
+<li>I access the registry this way only via my LAN</li>
+<li>I may change it later on...</li>
+</ul><br />
+<span>On my Fedora workstation where I build images:</span><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>$ cat <<<font color="#808080">"EOF"</font> | sudo tee /etc/docker/daemon.json >/dev/null
+{
+  <font color="#808080">"insecure-registries"</font>: [
+    <font color="#808080">"r0.lan.buetow.org:30001"</font>,
+    <font color="#808080">"r1.lan.buetow.org:30001"</font>,
+    <font color="#808080">"r2.lan.buetow.org:30001"</font>
+  ]
+}
+EOF
+$ sudo systemctl restart docker
+</pre>
+<br />
+<span>On each k3s node, make <span class='inlinecode'>registry.lan.buetow.org</span> resolve locally and point k3s at the NodePort:</span><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>$ <b><u><font color="#000000">for</font></u></b> node <b><u><font color="#000000">in</font></u></b> r0 r1 r2; <b><u><font color="#000000">do</font></u></b>
+> ssh root@$node <font color="#808080">"echo '127.0.0.1 registry.lan.buetow.org' >> /etc/hosts"</font>
+> <b><u><font color="#000000">done</font></u></b>
+
+$ <b><u><font color="#000000">for</font></u></b> node <b><u><font color="#000000">in</font></u></b> r0 r1 r2; <b><u><font color="#000000">do</font></u></b>
+> ssh root@$node <font color="#808080">"cat <<'EOF' > /etc/rancher/k3s/registries.yaml</font>
+<font color="#808080">mirrors:</font>
+<font color="#808080">  "</font>registry.lan.buetow.org:<font color="#000000">30001</font><font color="#808080">":</font>
+<font color="#808080">    endpoint:</font>
+<font color="#808080">      - "</font>http://localhost:<font color="#000000">30001</font><font 
color="#808080">"</font>
+<font color="#808080">EOF</font>
+<font color="#808080">systemctl restart k3s"</font>
+> <b><u><font color="#000000">done</font></u></b>
+</pre>
+<br />
+<span>Thanks to the relayd configuration earlier in the post, the external hostnames (<span class='inlinecode'>f3s.foo.zone</span>, etc.) can already reach NodePort <span class='inlinecode'>30001</span>, so publishing the registry to the outside world later would just be a matter of wiring up the DNS the same way as for the ingress hosts. For security reasons, this is disabled by default for now.</span><br />
+<br />
+<h3 style='display: inline' id='pushing-and-pulling-images'>Pushing and pulling images</h3><br />
+<br />
+<span>Tag any locally built image with one of the node hostnames on port <span class='inlinecode'>30001</span>, then push it. I usually target whichever node is closest to me, but any of the three will do:</span><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>$ docker tag my-app:latest r0.lan.buetow.org:<font color="#000000">30001</font>/my-app:latest
+$ docker push r0.lan.buetow.org:<font color="#000000">30001</font>/my-app:latest
+</pre>
+<br />
+<span>Inside the cluster (or from other nodes), reference the image via the service name that Helm created:</span><br />
+<br />
+<pre>
+image: docker-registry-service:5000/my-app:latest
+</pre>
+<br />
+<span>You can test the pull path straight away:</span><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>$ kubectl run registry-test \
+> --image=docker-registry-service:<font color="#000000">5000</font>/my-app:latest \
+> --restart=Never -n <b><u><font color="#000000">test</font></u></b> --command -- sleep <font color="#000000">300</font>
+</pre>
+<br />
+<span>If the pod pulls successfully, the private registry is 
ready for use by the rest of the workloads. Note that the commands above don't work as-is; they are included here for illustration purposes only.</span><br />
+<br />
+<h2 style='display: inline' id='example-anki-sync-server-from-the-private-registry'>Example: Anki Sync Server from the private registry</h2><br />
+<br />
+<span>One of the first workloads I migrated onto the k3s cluster after standing up the registry was my Anki sync server. The configuration repo ships everything in <span class='inlinecode'>examples/conf/f3s/anki-sync-server/</span>: a Docker build context plus a Helm chart that references the freshly built image.</span><br />
+<br />
+<h3 style='display: inline' id='build-and-push-the-image'>Build and push the image</h3><br />
+<br />
+<span>The Dockerfile lives under <span class='inlinecode'>docker-image/</span> and takes the Anki release to compile as an <span class='inlinecode'>ANKI_VERSION</span> build argument. The accompanying <span class='inlinecode'>Justfile</span> wraps the steps, but the raw commands look like this:</span><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>$ cd conf/f3s/examples/conf/f3s/anki-sync-server/docker-image
+$ docker build -t anki-sync-server:<font color="#000000">25.07</font>.5b --build-arg ANKI_VERSION=<font color="#000000">25.07</font>.<font color="#000000">5</font> .
+$ docker tag anki-sync-server:<font color="#000000">25.07</font>.5b \
+  r0.lan.buetow.org:<font color="#000000">30001</font>/anki-sync-server:<font color="#000000">25.07</font>.5b
+$ docker push r0.lan.buetow.org:<font color="#000000">30001</font>/anki-sync-server:<font color="#000000">25.07</font>.5b
+</pre>
+<br />
+<span>Because every k3s node treats <span class='inlinecode'>registry.lan.buetow.org:30001</span> as an insecure mirror (see above), the push succeeds regardless of which node answers. 
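The tag/push sequence follows one fixed naming convention, so it can be wrapped in a tiny function. This helper is my own hypothetical sketch (not part of the repo's Justfile); with `DRY_RUN` set it only prints the docker commands instead of running them:

```shell
# Hypothetical wrapper around the tag/push convention used in this post:
# <node>.lan.buetow.org:30001/<image:tag>. Not part of the conf repo.
push_to_registry() {  # usage: [DRY_RUN=1] push_to_registry <node> <image:tag>
  ref="$1.lan.buetow.org:30001/$2"
  # When DRY_RUN is set, prefix the commands with `echo` so nothing runs.
  ${DRY_RUN:+echo} docker tag "$2" "$ref"
  ${DRY_RUN:+echo} docker push "$ref"
}

DRY_RUN=1 push_to_registry r0 anki-sync-server:25.07.5b
# -> docker tag anki-sync-server:25.07.5b r0.lan.buetow.org:30001/anki-sync-server:25.07.5b
# -> docker push r0.lan.buetow.org:30001/anki-sync-server:25.07.5b
```

Dropping `DRY_RUN=1` executes the real `docker tag` and `docker push` against whichever node you pass as the first argument.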
If you prefer the shortcut, <span class='inlinecode'>just f3s</span> in that directory performs the same build/tag/push sequence.</span><br /> +<br /> +<h3 style='display: inline' id='create-the-anki-secret-and-storage-on-the-cluster'>Create the Anki secret and storage on the cluster</h3><br /> +<br /> +<span>The Helm chart expects the <span class='inlinecode'>services</span> namespace, a pre-created NFS directory, and a Kubernetes secret that holds the credentials the upstream container understands:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>$ ssh root@r0 <font color="#808080">"mkdir -p /data/nfs/k3svolumes/anki-sync-server/anki_data"</font> +$ kubectl create namespace services +$ kubectl create secret generic anki-sync-server-secret \ + --from-literal=SYNC_USER1=<font color="#808080">'paul:SECRETPASSWORD'</font> \ + -n services +</pre> +<br /> +<span>If the <span class='inlinecode'>services</span> namespace already exists, you can skip that line or let Kubernetes tell you the namespace is unchanged.</span><br /> +<br /> +<h3 style='display: inline' id='deploy-the-chart'>Deploy the chart</h3><br /> +<br /> +<span>With the prerequisites in place, install (or upgrade) the chart. It pins the container image to the tag we just pushed and mounts the NFS export via a <span class='inlinecode'>PersistentVolume/PersistentVolumeClaim</span> pair:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>$ cd ../helm-chart +$ helm upgrade --install anki-sync-server . 
-n services
+</pre>
+<br />
+<span>Helm provisions everything referenced in the templates:</span><br />
+<br />
+<pre>
+containers:
+- name: anki-sync-server
+  image: registry.lan.buetow.org:30001/anki-sync-server:25.07.5b
+  volumeMounts:
+  - name: anki-data
+    mountPath: /anki_data
+</pre>
+<br />
+<span>Once the release comes up, verify that the pod pulled the freshly pushed image and that the ingress we configured earlier resolves through relayd just like the Apache example.</span><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>$ kubectl get pods -n services
+$ kubectl get ingress anki-sync-server-ingress -n services
+$ curl https://anki.f3s.foo.zone/health
+</pre>
+<br />
+<span>All of this runs solely on first-party images that now live in the private registry, proving the full flow from local build to WireGuard-exposed service.</span><br />
+<br />
+<h2 style='display: inline' id='nfsv4-uid-mapping-for-postgres-backed-and-other-apps'>NFSv4 UID mapping for Postgres-backed (and other) apps</h2><br />
+<br />
+<span>NFSv4 only sees numeric user and group IDs, so the <span class='inlinecode'>postgres</span> account created inside the container must exist with the same UID/GID on the Kubernetes workers and on the FreeBSD NFS servers. 
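Comparing the `id` output from several hosts by eye gets tedious, so a small parsing helper can do the matching in a loop over `kubectl exec`/`ssh`. The extraction functions below are a hypothetical sketch of mine, not from the conf repo:

```shell
# Sketch: extract the numeric uid/gid from `id`-style output so a script
# could compare them across hosts automatically. Hypothetical helpers,
# not part of the conf repo.
uid_of() { expr "$1" : 'uid=\([0-9]*\)'; }
gid_of() { expr "$1" : '.*gid=\([0-9]*\)'; }

# Sample line as printed by `id postgres` when everything matches:
line='uid=999(postgres) gid=999(postgres) groups=999(postgres)'
if [ "$(uid_of "$line")" = 999 ] && [ "$(gid_of "$line")" = 999 ]; then
  echo "postgres uid/gid look consistent"
fi
```

Feeding it the real output of `kubectl exec ... -- id postgres`, `ssh root@r0 id postgres`, and `ssh f0 doas id postgres` would flag any host where the numbers drift.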
Otherwise the pod starts with UID 999, the export sees it as an unknown anonymous user, and Postgres fails to initialise its data directory.</span><br />
+<br />
+<span>To verify things line up end-to-end I run <span class='inlinecode'>id</span> in the container and on the hosts:</span><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>> ~ kubectl <b><u><font color="#000000">exec</font></u></b> -n services deploy/miniflux-postgres -- id postgres
+uid=<font color="#000000">999</font>(postgres) gid=<font color="#000000">999</font>(postgres) groups=<font color="#000000">999</font>(postgres)
+
+[root@r0 ~]<i><font color="silver"># id postgres</font></i>
+uid=<font color="#000000">999</font>(postgres) gid=<font color="#000000">999</font>(postgres) groups=<font color="#000000">999</font>(postgres)
+
+paul@f0:~ % doas id postgres
+uid=<font color="#000000">999</font>(postgres) gid=<font color="#000000">999</font>(postgres) groups=<font color="#000000">999</font>(postgres)
+</pre>
+<br />
+<span>The Rocky Linux workers get their matching user with plain <span class='inlinecode'>useradd</span>/<span class='inlinecode'>groupadd</span> (repeat on <span class='inlinecode'>r0</span>, <span class='inlinecode'>r1</span>, and <span class='inlinecode'>r2</span>):</span><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>[root@r0 ~]<i><font color="silver"># groupadd --gid 999 postgres</font></i>
+[root@r0 ~]<i><font color="silver"># useradd --uid 999 --gid 999 \</font></i>
+  --home-dir /var/lib/pgsql \
+  --shell /sbin/nologin postgres
+</pre>
+<br />
+<span>FreeBSD uses <span class='inlinecode'>pw</span>, so on each NFS server (<span class='inlinecode'>f0</span>, <span class='inlinecode'>f1</span>, <span class='inlinecode'>f2</span>) I created the same account and 
disabled shell access:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:~ % doas pw groupadd postgres -g <font color="#000000">999</font> +paul@f0:~ % doas pw useradd postgres -u <font color="#000000">999</font> -g postgres \ + -d /var/db/postgres -s /usr/sbin/nologin +</pre> +<br /> +<span>Once the UID/GID exist everywhere, the Miniflux chart in <span class='inlinecode'>examples/conf/f3s/miniflux</span> deploys cleanly. The chart provisions both the application and its bundled Postgres database, mounts the exported directory, and builds the DSN at runtime. The important bits live in <span class='inlinecode'>helm-chart/templates/persistent-volumes.yaml</span> and <span class='inlinecode'>deployment.yaml</span>:</span><br /> +<br /> +<pre> +# Persistent volume lives on the NFS export +hostPath: + path: /data/nfs/k3svolumes/miniflux/data + type: Directory +... +containers: +- name: miniflux-postgres + image: postgres:17 + volumeMounts: + - name: miniflux-postgres-data + mountPath: /var/lib/postgresql/data +</pre> +<br /> +<span>Follow the <span class='inlinecode'>README</span> beside the chart to create the secrets and the target directory:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>$ cd examples/conf/f3s/miniflux/helm-chart +$ mkdir -p /data/nfs/k3svolumes/miniflux/data +$ kubectl create secret generic miniflux-db-password \ + --from-literal=fluxdb_password=<font color="#808080">'YOUR_PASSWORD'</font> -n services +$ kubectl create secret generic miniflux-admin-password \ + --from-literal=admin_password=<font color="#808080">'YOUR_ADMIN_PASSWORD'</font> -n services +$ helm upgrade --install miniflux . 
-n services --create-namespace
+</pre>
+<br />
+<span>And to verify it's all up:</span><br />
+<br />
+<pre>
+$ kubectl get all --namespace=services | grep mini
+pod/miniflux-postgres-556444cb8d-xvv2p   1/1     Running   0     54d
+pod/miniflux-server-85d7c64664-stmt9     1/1     Running   0     54d
+service/miniflux            ClusterIP   10.43.47.80    <none>   8080/TCP   54d
+service/miniflux-postgres   ClusterIP   10.43.139.50   <none>   5432/TCP   54d
+deployment.apps/miniflux-postgres   1/1   1   1   54d
+deployment.apps/miniflux-server     1/1   1   1   54d
+replicaset.apps/miniflux-postgres-556444cb8d   1   1   1   54d
+replicaset.apps/miniflux-server-85d7c64664     1   1   1   54d
+</pre>
+<br />
+<h3 style='display: inline' id='helm-charts-currently-in-service'>Helm charts currently in service</h3><br />
+<br />
+<span>These are the charts that already live under <span class='inlinecode'>examples/conf/f3s</span> and run on the cluster today (and I'll keep adding more as new services graduate into production):</span><br />
+<br />
+<ul>
+<li><span class='inlinecode'>anki-sync-server</span> — custom-built image served from the private registry, stores decks on <span class='inlinecode'>/data/nfs/k3svolumes/anki-sync-server/anki_data</span>, and authenticates through the <span class='inlinecode'>anki-sync-server-secret</span>.</li>
+<li><span class='inlinecode'>audiobookshelf</span> — media streaming stack with three hostPath mounts (<span class='inlinecode'>config</span>, <span class='inlinecode'>audiobooks</span>, <span class='inlinecode'>podcasts</span>) so the library survives node rebuilds.</li>
+<li><span class='inlinecode'>example-apache</span> — minimal HTTP service I use for smoke-testing ingress and relayd rules.</li>
+<li><span class='inlinecode'>example-apache-volume-claim</span> — Apache plus PVC variant that exercises NFS-backed storage for walkthroughs like the one earlier in this post.</li>
+<li><span class='inlinecode'>miniflux</span> — the Postgres-backed feed 
reader described above, wired for NFSv4 UID mapping and per-release secrets.</li> +<li><span class='inlinecode'>opodsync</span> — podsync deployment with its data directory under <span class='inlinecode'>/data/nfs/k3svolumes/opodsync/data</span>.</li> +<li><span class='inlinecode'>radicale</span> — CalDAV/CardDAV (and gpodder) backend with separate <span class='inlinecode'>collections</span> and <span class='inlinecode'>auth</span> volumes.</li> +<li><span class='inlinecode'>registry</span> — the plain-HTTP Docker registry exposed on NodePort 30001 and mirrored internally as <span class='inlinecode'>registry.lan.buetow.org:30001</span>.</li> +<li><span class='inlinecode'>syncthing</span> — two-volume setup for config and shared data, fronted by the <span class='inlinecode'>syncthing.f3s.foo.zone</span> ingress.</li> +<li><span class='inlinecode'>wallabag</span> — read-it-later service with persistent <span class='inlinecode'>data</span> and <span class='inlinecode'>images</span> directories on the NFS export.</li> +</ul><br /> +<span>I hope you enjoyed this walkthrough. In the next part of this series, I will likely tackle monitoring, backup, or observability. 
I haven't fully decided yet which topic to cover next, so stay tuned!</span><br /> +<br /> +<span>Other *BSD-related posts:</span><br /> +<br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments (You are currently reading this)</a><br /> +<a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> +<a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> +<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> +<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br /> +<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br /> +<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> +<a class='textlink' href='./2024-04-01-KISS-high-availability-with-OpenBSD.html'>2024-04-01 KISS high-availability with OpenBSD</a><br /> +<a class='textlink' href='./2024-01-13-one-reason-why-i-love-openbsd.html'>2024-01-13 One reason why I love OpenBSD</a><br /> +<a class='textlink' href='./2022-10-30-installing-dtail-on-openbsd.html'>2022-10-30 Installing DTail on OpenBSD</a><br /> +<a class='textlink' href='./2022-07-30-lets-encrypt-with-openbsd-and-rex.html'>2022-07-30 Let's Encrypt with OpenBSD and Rex</a><br /> +<a class='textlink' href='./2016-04-09-jails-and-zfs-on-freebsd-with-puppet.html'>2016-04-09 Jails and ZFS with Puppet on FreeBSD</a><br /> +<br /> 
+<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span></span><br /> +<br /> +<a class='textlink' href='../'>Back to the main site</a><br /> + </div> + </content> + </entry> + <entry> <title>Bash Golf Part 4</title> <link href="https://foo.zone/gemfeed/2025-09-14-bash-golf-part-4.html" /> <id>https://foo.zone/gemfeed/2025-09-14-bash-golf-part-4.html</id> @@ -1291,6 +2370,7 @@ content = "{CODE}" <a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> <a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> <a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage (You are currently reading this)</a><br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <br /> <a href='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png'><img alt='f3s logo' title='f3s logo' src='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png' /></a><br /> <br /> @@ -2094,7 +3174,7 @@ ifconfig_re0_alias0=<font color="#808080">"inet vhid 1 pass testpass alias 192.1 <span>Next, update <span class='inlinecode'>/etc/hosts</span> on all nodes (<span class='inlinecode'>f0</span>, <span class='inlinecode'>f1</span>, <span class='inlinecode'>f2</span>, <span class='inlinecode'>r0</span>, <span class='inlinecode'>r1</span>, <span class='inlinecode'>r2</span>) to resolve the VIP hostname:</span><br /> <br /> <pre> -192.168.1.138 f3s-storage-ha f3s-storage-ha.lan f3s-storage-ha.lan.buetow.org +192.168.2.138 f3s-storage-ha f3s-storage-ha.wg0 f3s-storage-ha.wg0.wan.buetow.org </pre> <br /> <span>This allows clients to connect to <span class='inlinecode'>f3s-storage-ha</span> 
regardless of which physical server is currently the MASTER.</span><br /> @@ -2850,7 +3930,7 @@ http://www.gnu.org/software/src-highlite --> clientaddr=<font color="#000000">127.0</font>.<font color="#000000">0.1</font>,local_lock=none,addr=<font color="#000000">127.0</font>.<font color="#000000">0.1</font>) <i><font color="silver"># For persistent mount, add to /etc/fstab:</font></i> -<font color="#000000">127.0</font>.<font color="#000000">0.1</font>:/data/nfs/k3svolumes /data/nfs/k3svolumes nfs4 port=<font color="#000000">2323</font>,_netdev <font color="#000000">0</font> <font color="#000000">0</font> +<font color="#000000">127.0</font>.<font color="#000000">0.1</font>:/k3svolumes /data/nfs/k3svolumes nfs4 port=<font color="#000000">2323</font>,_netdev,soft,timeo=<font color="#000000">10</font>,retrans=<font color="#000000">2</font>,intr <font color="#000000">0</font> <font color="#000000">0</font> </pre> <br /> <span>Note: The mount uses localhost (<span class='inlinecode'>127.0.0.1</span>) because stunnel is listening locally and forwarding the encrypted traffic to the remote server.</span><br /> @@ -3128,10 +4208,13 @@ Jul <font color="#000000">06</font> <font color="#000000">10</font>:<font color= <span>Both technologies could run on top of our encrypted ZFS volumes, combining ZFS's data integrity and encryption features with distributed storage capabilities. This would be particularly interesting for workloads that need either S3-compatible APIs (MinIO) or transparent distributed POSIX storage (MooseFS). What about Ceph and GlusterFS? Unfortunately, there doesn't seem to be great native FreeBSD support for them. 
However, other alternatives also appear suitable for my use case.</span><br /> <br /> <br /> -<span>I'm looking forward to the next post in this series, where we will set up k3s (Kubernetes) on the Linux VMs.</span><br /> +<span>Read the next post of this series:</span><br /> +<br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <br /> <span>Other *BSD-related posts:</span><br /> <br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage (You are currently reading this)</a><br /> <a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> <a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> @@ -4194,6 +5277,7 @@ Jul <font color="#000000">06</font> <font color="#000000">10</font>:<font color= <a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> <a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network (You are currently reading this)</a><br /> <a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <br /> <a 
href='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png'><img alt='f3s logo' title='f3s logo' src='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png' /></a><br /> <br /> @@ -5174,6 +6258,7 @@ peer: 2htXdNcxzpI2FdPDJy4T4VGtm1wpMEQu1AkQHjNY6F8= <br /> <span>Other *BSD-related posts:</span><br /> <br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> <a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network (You are currently reading this)</a><br /> <a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> @@ -5755,6 +6840,7 @@ __ejm\___/________dwb`---`______________________ <a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs (You are currently reading this)</a><br /> <a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> <a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <br /> <a href='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png'><img alt='f3s logo' title='f3s logo' src='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png' /></a><br /> <br /> @@ -6331,6 +7417,7 @@ Apr <font color="#000000">4</font> <font color="#000000">23</font>:<font 
color= <br /> <span>Other *BSD-related posts:</span><br /> <br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> <a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> <a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs (You are currently reading this)</a><br /> @@ -7056,6 +8143,7 @@ This is perl, v5.<font color="#000000">8.8</font> built <b><u><font color="#0000 <a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> <a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> <a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <br /> <a href='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png'><img alt='f3s logo' title='f3s logo' src='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png' /></a><br /> <br /> @@ -7445,6 +8533,7 @@ Jan 26 17:36:32 f2 apcupsd[2159]: apcupsd shutdown succeeded <br /> <span>Other BSD related posts are:</span><br /> <br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <a 
class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> <a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> <a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> @@ -8048,7 +9137,7 @@ Jan 26 17:36:32 f2 apcupsd[2159]: apcupsd shutdown succeeded </content> </entry> <entry> - <title>Deciding on the hardware</title> + <title>f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</title> <link href="https://foo.zone/gemfeed/2024-12-03-f3s-kubernetes-with-freebsd-part-2.html" /> <id>https://foo.zone/gemfeed/2024-12-03-f3s-kubernetes-with-freebsd-part-2.html</id> <updated>2024-12-02T23:48:21+02:00</updated> @@ -8059,7 +9148,7 @@ Jan 26 17:36:32 f2 apcupsd[2159]: apcupsd shutdown succeeded <summary>This is the second blog post about my f3s series for my self-hosting demands in my home lab. f3s? 
The 'f' stands for FreeBSD, and the '3s' stands for k3s, the Kubernetes distribution I will use on FreeBSD-based physical machines.</summary> <content type="xhtml"> <div xmlns="http://www.w3.org/1999/xhtml"> - <span> f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</span><br /> + <h1 style='display: inline' id='f3s-kubernetes-with-freebsd---part-2-hardware-and-base-installation'>f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</h1><br /> <br /> <span class='quote'>Published at 2024-12-02T23:48:21+02:00</span><br /> <br /> @@ -8075,6 +9164,7 @@ Jan 26 17:36:32 f2 apcupsd[2159]: apcupsd shutdown succeeded <a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> <a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> <a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <br /> <a href='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png'><img alt='f3s logo' title='f3s logo' src='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png' /></a><br /> <br /> @@ -8085,6 +9175,7 @@ Jan 26 17:36:32 f2 apcupsd[2159]: apcupsd shutdown succeeded <h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br /> <br /> <ul> +<li><a href='#f3s-kubernetes-with-freebsd---part-2-hardware-and-base-installation'>f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a></li> <li><a href='#deciding-on-the-hardware'>Deciding on the hardware</a></li> <li>⇢ <a href='#not-arm-but-intel-n100-'>Not ARM but Intel N100 </a></li> <li>⇢ <a 
href='#beelink-unboxing'>Beelink unboxing</a></li> @@ -8406,6 +9497,7 @@ dev.cpu.<font color="#000000">0</font>.freq: <font color="#000000">2922</font> <br /> <span>Other *BSD-related posts:</span><br /> <br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> <a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> <a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> @@ -8452,6 +9544,7 @@ dev.cpu.<font color="#000000">0</font>.freq: <font color="#000000">2922</font> <a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> <a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> <a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <br /> <a href='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png'><img alt='f3s logo' title='f3s logo' src='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png' /></a><br /> <br /> @@ -8603,6 +9696,7 @@ dev.cpu.<font color="#000000">0</font>.freq: <font color="#000000">2922</font> <br /> <span>Other *BSD-related posts:</span><br /> <br /> +<a class='textlink' 
href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> <a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> <a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> @@ -11105,6 +12199,7 @@ http://www.gnu.org/software/src-highlite --> <br /> <span>Other *BSD and KISS related posts are:</span><br /> <br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> <a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> <a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> @@ -11473,6 +12568,7 @@ $ doas reboot <i><font color="silver"># Just in case, reboot one more time</font <br /> <span>Other *BSD related posts are:</span><br /> <br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br /> <a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: 
Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br /> <a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> @@ -13074,366 +14170,4 @@ http://www.gnu.org/software/src-highlite --> </div> </content> </entry> - <entry> - <title>'Software Developer's Career Guide and Soft Skills' book notes</title> - <link href="https://foo.zone/gemfeed/2023-07-17-career-guide-and-soft-skills-book-notes.html" /> - <id>https://foo.zone/gemfeed/2023-07-17-career-guide-and-soft-skills-book-notes.html</id> - <updated>2023-07-17T04:56:20+03:00</updated> - <author> - <name>Paul Buetow aka snonux</name> - <email>paul@dev.buetow.org</email> - </author> - <summary>These are my notes on two books by 'John Sonmez' that I found helpful. I also added some of my own key points. These notes are mainly for my own use, but you might find them helpful, too.</summary> - <content type="xhtml"> - <div xmlns="http://www.w3.org/1999/xhtml"> - <h1 style='display: inline' id='software-developmers-career-guide-and-soft-skills-book-notes'>"Software Developer's Career Guide and Soft Skills" book notes</h1><br /> -<br /> -<span class='quote'>Published at 2023-07-17T04:56:20+03:00</span><br /> -<br /> -<span>These are my notes on two books by "John Sonmez" that I found helpful. I also added some of my own key points. These notes are mainly for my own use, but you might find them helpful, too.</span><br /> -<br /> -<pre> - ,.......... .........., - ,..,' '.' ',.., - ,' ,' : ', ', - ,' ,' : ', ', - ,' ,' : ', ', - ,' ,'............., : ,.............', ', -,' '............ '.' ............' 
', - '''''''''''''''''';''';'''''''''''''''''' - ''' -</pre> -<br /> -<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br /> -<br /> -<ul> -<li><a href='#software-developmers-career-guide-and-soft-skills-book-notes'>"Software Developer's Career Guide and Soft Skills" book notes</a></li> -<li>⇢ <a href='#improve'>Improve</a></li> -<li>⇢ ⇢ <a href='#always-learn-new-things'>Always learn new things</a></li> -<li>⇢ ⇢ <a href='#set-goals'>Set goals</a></li> -<li>⇢ ⇢ <a href='#ratings'>Ratings</a></li> -<li>⇢ ⇢ <a href='#promotions'>Promotions</a></li> -<li>⇢ ⇢ <a href='#finish-things'>Finish things</a></li> -<li>⇢ <a href='#expand-the-empire'>Expand the empire</a></li> -<li>⇢ <a href='#be-pragmatic-and-also-manage-your-time'>Be pragmatic and also manage your time</a></li> -<li>⇢ ⇢ <a href='#the-quota-system'>The quota system</a></li> -<li>⇢ ⇢ <a href='#don-t-waste-time'>Don't waste time</a></li> -<li>⇢ ⇢ <a href='#habits'>Habits</a></li> -<li><a href='#work-life-balance'>Work-life balance</a></li> -<li>⇢ <a href='#mental-health'>Mental health</a></li> -<li>⇢ <a href='#physical-health'>Physical health</a></li> -<li>⇢ <a href='#no-drama'>No drama</a></li> -<li><a href='#personal-brand'>Personal brand</a></li> -<li>⇢ <a href='#market-yourself'>Market yourself</a></li> -<li>⇢ <a href='#networking'>Networking</a></li> -<li>⇢ <a href='#public-speaking'>Public speaking</a></li> -<li><a href='#new-job'>New job</a></li> -<li>⇢ <a href='#for-the-interview'>For the interview</a></li> -<li>⇢ <a href='#find-the-right-type-of-company'>Find the right type of company</a></li> -<li>⇢ <a href='#apply-for-the-new-job'>Apply for the new job</a></li> -<li>⇢ <a href='#negotiation'>Negotiation</a></li> -<li>⇢ <a href='#leaving-the-old-job'>Leaving the old job</a></li> -<li><a href='#other-things'>Other things</a></li> -<li>⇢ <a href='#testing'>Testing</a></li> -<li>⇢ <a href='#books-to-read'>Books to read</a></li> -</ul><br /> -<h2 style='display: inline' 
id='improve'>Improve</h2><br /> -<br /> -<h3 style='display: inline' id='always-learn-new-things'>Always learn new things</h3><br /> -<br /> -<span>When you learn something new, e.g. a programming language, first gather an overview, learn from multiple sources, play around, learn by doing rather than just consuming, and form your own questions. Don't read too much upfront. A large amount of time is spent learning technical skills which are never used. You want a practical set of skills you actually use. You need to know 20 percent to get 80 percent of the results.</span><br /> -<br /> -<ul> -<li>Learn a technology with a goal, e.g. implement a tool. Practice, practice, practice.</li> -<li>"I know X can do Y, I don't know exactly how, but I can look it up."</li> -<li>Read what experts are writing, for example follow blogs. Stay up to date and spend half an hour per day reading blogs and books.</li> -<li>Pick an open source application, read the code and try to understand it to get a feel for the syntax of the programming language.</li> -<li>Understand that knowing the standard library makes you a much better programmer.</li> -<li>Self-learning is the top skill a programmer can have and is also useful in other aspects of your life.</li> -<li>Keep learning new skills every day. Code every day. Don't be overconfident about job security. Read blogs, read books.</li> -<li>If you want to learn, then do it by exploring. Also teach what you learned (for example write a blog post or hold a presentation).</li> -</ul><br /> -<span>Fake it until you make it. But be honest about your abilities or lack thereof. There is only the time between now and when you make it. Point to your ability to learn.</span><br /> -<br /> -<span>Boot camps: The advantage of a boot camp is that you pragmatically learn things fast. We almost always overestimate what we can do in a day. Especially during boot camps. 
Connect to others during the boot camps.</span><br /> -<br /> -<h3 style='display: inline' id='set-goals'>Set goals</h3><br /> -<br /> -<span>Your own goals are important, but the manager also looks at how the team performs and how someone can help the team perform better. Check whether you are on track with your goals every two weeks in order to avoid surprises at the annual review. Make concrete goals for the next review. Track and document your progress. Invest in your education. Make your goals known. If you want something, then ask for it. Nobody but you knows what you want.</span><br /> -<br /> -<h3 style='display: inline' id='ratings'>Ratings</h3><br /> -<br /> -<span>If you have to rate yourself, that's a trap. It never works in an unbiased way. Rate your weakest area as high as possible minus one point, and rate yourself as well as you can everywhere else. Nobody puts a gun to their own head for fun. </span><br /> -<br /> -<ul> -<li>Don't do peer ratings, they can backfire on you. What if the colleague becomes your new boss?</li> -<li>Corporate rankings are unfortunately shaped by HR guidelines and politics and only partially mirror your actual performance.</li> -</ul><br /> -<h3 style='display: inline' id='promotions'>Promotions</h3><br /> -<br /> -<span>The most valuable employees are the ones who make themselves obsolete and automate everything away. Keep a financial safety net of 3 to 6 months. Save at least 10 percent of your earnings. Also, making more money does not mean that you have to spend more money. Is a new car better than a used one when both can bring you from A to B? Liabilities vs. assets.</span><br /> -<br /> -<ul> -<li>Raise or promotion, what's better? A promotion is better, as the money will follow anyway.</li> -<li>Take projects no-one wants and make them shine. A promotion will follow.</li> -<li>A promotion is not going to come to you because you deserve it. 
You have to hunt and ask for it.</li> -<li>Track all kudos (e.g. ask for emails from your colleagues).</li> -<li>Big corporations' HR won't keep track of this for you. That's why it's so important to keep track of your accomplishments and kudos yourself.</li> -<li>If you want a raise, be specific about how much and be able to back up your demands. Don't make threats, and no ultimatums.</li> -<li>The best way to get a promotion is to switch jobs. You can even switch back with a better salary.</li> -</ul><br /> -<h3 style='display: inline' id='finish-things'>Finish things</h3><br /> -<br /> -<span>Hard work is necessary to accomplish results. However, work smarter, not harder. Furthermore, working smart is not a substitute for working hard. Do both: work hard and smart.</span><br /> -<br /> -<ul> -<li>Learn to finish things without motivation. Things will pay off when you stick with them, and eventually motivation will come back.</li> -<li>You will fail if you don't plan realistically. Also set a schedule and follow it as if your life depends on it.</li> -<li>Advances come only if you give more than asked. Consistency, commitment and knowing what you need to do matter more than hard work alone.</li> -<li>Any action is better than no action. If you get stuck you have gained nothing.</li> -<li>You need to know the unknowns. Identify as many unknowns as possible. </li> -</ul><br /> -<span>Hard vs fun: Both engage the brain (video games vs work). Some work is hard and some is easy. Hard work is boring. The harsh truth is you have to put in hard and boring work in order to accomplish things and be successful. Work won't always be boring though, as joy will follow with mastery.</span><br /> -<br /> -<span>Defeat is finally giving up. Failure is the road to success, embrace it. Failure does not define you; how you respond to it does. 
Events don't make you unhappy; how you react to them does.</span><br /> -<br /> -<h2 style='display: inline' id='expand-the-empire'>Expand the empire</h2><br /> -<br /> -<span>The larger your empire is, the larger your circle of influence is. The larger your circle of influence is, the more opportunities you have.</span><br /> -<br /> -<ul> -<li>Do the dirty work if you want to expand the empire. That's where the opportunities are.</li> -<li>SCRUM often fails due to a lack of commitment. The backlog just becomes a wish list waiting to get completed.</li> -<li>Apply your quality standards to your work. Don't cross the line of compromise. Always improve your skills. Never be happy with being just good enough.</li> -</ul><br /> -<span>Become visible, keep track of your accomplishments. E.g. write a weekly summary. Do presentations, be seen. Learn new things and share your learnings. Be the problem solver and not the blamer.</span><br /> -<br /> -<h2 style='display: inline' id='be-pragmatic-and-also-manage-your-time'>Be pragmatic and also manage your time</h2><br /> -<br /> -<span>Make use of time boxing via the Pomodoro technique: Set a target number of rounds and track the rounds. That gives you an exact measure of focused work time. That's really the trick. For example, set a goal of six daily pomodoros.</span><br /> -<br /> -<ul> -<li>Every time you do something, question whether it makes sense; be pragmatic and don't do it just because it is best practice.</li> -<li>You can also apply the time boxing technique (Cal Newport) for focused deep work.</li> -</ul><br /> -<span>You should feel good about the work done, pomodoro-wise, even if you haven't finished the task at hand yet. This helps you to enjoy your time off more. Working longer doesn't necessarily accomplish more.</span><br /> -<br /> -<h3 style='display: inline' id='the-quota-system'>The quota system</h3><br /> -<br /> -<span>Define a quota of things done. E.g. N runs per week, M blog posts per month or O pomodoros per week. 
This helps with consistency. Truly commit to these quotas. Failure is not an option. Start with small commitments. Don't commit to something you can't fulfill, otherwise you set yourself up for failure.</span><br /> -<br /> -<ul> -<li>Why does the quota system work? A slow and consistent pace is the key. It also overcomes willpower weaknesses, as the goals are preset.</li> -<li>Internal motivation is more important than external motivation. Check out Daniel Pink's book "Drive".</li> -<li>Multitasking: Batching is effective. E.g. handle emails twice daily at pre-set times.</li> -</ul><br /> -<h3 style='display: inline' id='don-t-waste-time'>Don't waste time</h3><br /> -<br /> -<span>The biggest time waster is TV watching. The TV is programming you. It's insane how much TV Americans watch even though they work full time. Schedule one show at a time and watch it when you want to watch it. Most movies are crap anyway. The good movies will come to you, as people will talk about them.</span><br /> -<br /> -<ul> -<li>Social media is a time waster as well. Schedule your social media times. For example, be on Facebook only for a maximum of one hour on Saturdays.</li> -<li>Meetings can waste time as well. Simply don't go to them. Try to cancel a meeting if it can be dealt with via email.</li> -<li>Enjoying things is not a waste of time. E.g. you could still play a game once in a while. It is important not to cut everything you enjoy out of your life.</li> -</ul><br /> -<h3 style='display: inline' id='habits'>Habits</h3><br /> -<br /> -<span>Try to have as many good habits as possible. Start with easy habits, and make them a little bit more challenging over time. Set anchors and rewards. Over time the routines will become habits naturally.</span><br /> -<br /> -<span>Habit stacking is effective, which is combining multiple habits at the same time. 
For example, you can work out on a cross trainer while watching a learning video on O'Reilly Safari Online while getting closer to your weekly step goal.</span><br /> -<br /> -<ul> -<li>We don't have direct control over our habits, but we do over our routines.</li> -<li>Routines help to form the habits, though.</li> -</ul><br /> -<h1 style='display: inline' id='work-life-balance'>Work-life balance</h1><br /> -<br /> -<span>Avoid working overtime. That's not as beneficial as you might think and comes only with very small rewards. Rather, invest in yourself and not in your employer.</span><br /> -<br /> -<ul> -<li>Work-life balance is a myth. Make it so that you enjoy both work and your personal life, not just your personal life.</li> -<li>Maintain fewer but good relationships. As a reward, your life will be better integrated.</li> -<li>Live in the present moment. Make the best of every moment of your life.</li> -<li>Enjoy every aspect of your life. If you want to take away one thing from this book, it is this.</li> -</ul><br /> -<span>Use your most productive hours to work on yourself. Make that your priority. Make taking care of yourself a priority (e.g. do workouts or learn a new language). You can always work out one or two hours per day, but are you willing to pay the price?</span><br /> -<br /> -<h2 style='display: inline' id='mental-health'>Mental health</h2><br /> -<br /> -<ul> -<li>Friendships and positive thinking help you have and maintain better health, a longer life, better productivity and increased happiness.</li> -<li>Positive thinking can be trained and become a habit. Read the book "The Power of Positive Thinking".</li> -<li>Stoicism helps. Meditation helps. Playing for fun helps too.</li> -</ul><br /> -<span>Become the person you want to become (your self-image). Program your brain unconsciously. Don't become the person other people want you to be. Embrace yourself, you are you.</span><br /> -<br /> -<span>In most cases burnout is just an illusion. 
If you don't have motivation, push through the wall. People usually don't pass the wall because they feel they are burned out. After pushing through the wall you will have the most fun; for example, you will be able to play the guitar really well.</span><br /> -<br /> -<h2 style='display: inline' id='physical-health'>Physical health</h2><br /> -<br /> -<span>Utilise a standing desk and treadmill (you could walk and type at the same time). Increase the incline in order to burn more calories. Even at a standing desk you burn more calories than sitting. When you use the Pomodoro technique, you can use the small breaks for push-ups (this may not work as well when you are in a fasted state).</span><br /> -<br /> -<ul> -<li>You can only do one thing at a time, lose fat or gain muscle. Not both at the same time.</li> -<li>Train your strength by heavy lifting, but only with very few repetitions (e.g. 5 max for each exercise; everything over this is bodybuilding).</li> -<li>If you want to increase muscle mass, use medium weights but lift them more often. If you want to increase your endurance, lift light weights but with even more reps.</li> -<li>Avoid highly processed foods.</li> -</ul><br /> -<span>Intermittent fasting is an effective method to maintain weight and health. But it does not mean that you can eat nothing but junk food in the feeding windows. Also, diet and nutrition are the most important factors for health and fitness. They also make it easier to stay focused and positive.</span><br /> -<br /> -<h2 style='display: inline' id='no-drama'>No drama</h2><br /> -<br /> -<span>Avoid drama at work. Where there are humans, there is drama. You can decide where to spend your energy. But don't avoid conflict. Conflict is healthy in any kind of relationship. Be tactful and state your opinion. The goal is to find the best solution to the problem.</span><br /> -<br /> -<span>Don't worry about what other people do and don't do. Only worry about yourself. Shut up and get your own things done. 
But you could help to inspire an underperforming colleague.</span><br /> -<br /> -<ul> -<li>During an argument, take the opponent's position and see how your opinion changes.</li> -<li>If you try to convince someone else, it's an argument. If you try to find the best solution, it's a resolution.</li> -<li>If someone is hurting the team, let the manager know, but phrase it nicely.</li> -<li>How do you get rid of a person who never stops talking? Officially set up focus hours where you don't want to be interrupted. Present it as if it were your own shortcoming that you get interrupted easily.</li> -<li>TOXIC PEOPLE: AVOID THEM. RUN.</li> -<li>The boss likes it if you get shit done without being chased all the time about things, and also without drama.</li> -</ul><br /> -<span>You have to learn how to work in a team. Be honest but tactful. It's not about being the loudest but about selling your ideas. Don't argue, otherwise you won't sell anything. Be persuasive by finding common ground. Or lead the colleagues to your idea and don't sell it upfront. Communicate clearly.</span><br /> -<br /> -<h1 style='display: inline' id='personal-brand'>Personal brand</h1><br /> -<br /> -<ul> -<li>Invest in your value outside the company. Build your personal brand. Show how valuable you are, also to other companies. Become an asset.</li> -<li>Invest in your education. Make your goals known. If you want something, ask for it (see also the sections about goals in this document).</li> -</ul><br /> -<h2 style='display: inline' id='market-yourself'>Market yourself</h2><br /> -<br /> -<ul> -<li>The best way to market yourself is to make yourself useful.</li> -<li>Create a brand. Decide your focus. Throw your name out as often as possible.</li> -</ul><br /> -<span>Have a blog. Schedule your posts. Consistency beats every other factor. E.g. post one new article a month. Find your voice; you don't have to sound academic. Keep writing: if you keep at it long enough, the rewards will come. Your own blog can take 5 years to take off. 
Most people give up too soon.</span><br /> -<br /> -<ul> -<li>Consistency of your blog is key. Also write quality content. Don't try to be a man of success, but try to be a man of value.</li> -<li>Have an elevator pitch: "buetow.org - Having fun with computers!"</li> -<li>Have social media accounts, especially the ones which are more tech related.</li> -</ul><br /> -<h2 style='display: inline' id='networking'>Networking</h2><br /> -<br /> -<span>Ask people questions so they talk about themselves. They are not really interested in you. Use meetup.com to find groups you are interested in and build up your network over time. Don't drink at networking events, even when others do. Talking to other people at events only has upsides. Just saying "hi" and introducing yourself is enough. What's the worst that can happen? If the person rejects you, so what; life goes on. Ask open questions, not "yes"/"no" questions. E.g.: "What is your story, why are you here?".</span><br /> -<br /> -<h2 style='display: inline' id='public-speaking'>Public speaking</h2><br /> -<br /> -<span>Before your talk, go on stage 10 minutes in advance. Introduce yourself to the people in the front row. During the talk they will smile at you and encourage you.</span><br /> -<br /> -<ul> -<li>Try at least 5 times before giving up on public speaking. You can also start small, e.g. present a topic you are learning at work.</li> -<li>Practise your talk and timing. You can also record your practice runs.</li> -</ul><br /> -<span>Just do it. Just go to conferences, even if you are not speaking. Sell your boss on what you would learn, and that you would present the learnings to the team afterwards.</span><br /> -<br /> -<h1 style='display: inline' id='new-job'>New job</h1><br /> -<br /> -<h2 style='display: inline' id='for-the-interview'>For the interview</h2><br /> -<br /> -<ul> -<li>Build up a network before the interview. E.g., follow and comment on blogs. Or go to meet-ups and conferences. 
Join user groups.</li> -<li>Ask to touch base before the real interview and ask questions about the company. Do "pre-interviews".</li> -<li>Have a blog; a CV can only be 2 pages and an interview can only last a couple of hours. A blog also helps you become a better communicator.</li> -</ul><br /> -<span>If you are specialized, then there is a better chance of getting a fitting job. No one will hire a general lawyer if there are specialized lawyers available. Even if you are specialized, you will still have a wide range of skills (T-shaped knowledge).</span><br /> -<br /> -<h2 style='display: inline' id='find-the-right-type-of-company'>Find the right type of company</h2><br /> -<br /> -<span>Not all companies are equal. They have individual cultures and guidelines.</span><br /> -<br /> -<ul> -<li>Startup: dynamic and larger impact. Many hats on.</li> -<li>Medium-sized companies: the most stable ones. No cutting-edge technologies. No crazy working hours.</li> -<li>Large company: very established with a lot of structure, but with constant layoffs and restructurings. You can have less impact. Complex politics.</li> -<li>Working for yourself: This is harder than you think, probably much harder.</li> -</ul><br /> -<span>Work in a tech company if you want to work on/with cutting-edge technologies.</span><br /> -<br /> -<h2 style='display: inline' id='apply-for-the-new-job'>Apply for the new job</h2><br /> -<br /> -<span>Get a professional resume writer. Get referrals for writers and ask for samples. Get proficient with algorithm and data structure interview questions (see the "Cracking the Coding Interview" book and blog).</span><br /> -<br /> -<ul> -<li>Apply for each job with a CV specialised for it. Each CV then fits the job better.</li> -<li>The best way to get a job is via a personal referral or inbound marketing. The latter is somewhat rare.</li> -<li>Inbound marketing is, for example, when someone responds to your blog and offers you a job.</li> -<li>Interview the interviewer. 
Be persistent.</li> -<li>Create creative-looking resumes (see the Simple Programmer website). Use an action-result style for the resume.</li> -</ul><br /> -<span>Invest in your dress code, as appearance matters. It does make sense to invest in your style. You could even hire a professional stylist (not my personal way though).</span><br /> -<br /> -<h2 style='display: inline' id='negotiation'>Negotiation</h2><br /> -<br /> -<ul> -<li>Whoever names the number first loses. You don't know what someone else is expecting unless told. A low-ball number may be an issue, but you have to know the market.</li> -<li>Salary is not about what you need but what you are worth. Try to find out what you are worth.</li> -<li>Big tech companies have a pay scale. You can ask for this.</li> -<li>Don't tell them your current salary. Only make one counter offer and say "If you do X then I commit today". Be tactful and not rude. Nobody wants to be taken advantage of. Also, don't be arrogant.</li> -<li>If the company wants to know your range, respond: "I would rather learn more about the job and compensation. You have a range in mind, correct?" Be brave and just pause here.</li> -<li>Otherwise, if the company refuses, then say: "If you tell me what the range is, then although I am not yet sure what my exact salary requirements are, I can see if the range is in line with what I am looking for." If they absolutely refuse, give a high-ball range you would expect and make it conditional on the overall compensation package. E.g. 70k to 100k depending on the compensation package. THE LOW END SHOULD BE YOUR REAL LOW END. Play a little bit of hardball here and be brave. Practise it.</li> -<li>Put 10 percent on top of the salary range in a counter offer.</li> -<li>Everything is negotiable, not only the salary.</li> -<li>Job market rate: check it for the recruitment rate negotiation.</li> -<li>Don't make a rushed decision based on deadlines. 
Make a fairly high counter offer shortly before the deadline.</li> -<li>You should also learn to cope with rejection while selling yourself. There is no such thing as job security.</li> -</ul><br /> -<ul> -<li>"Never Split the Difference" is the best book for learning negotiation techniques.</li> -</ul><br /> -<h2 style='display: inline' id='leaving-the-old-job'>Leaving the old job</h2><br /> -<br /> -<span>When leaving a job, make it as clean and as non-personal as possible. Never complain and never explain. Don't worry about abandoning the team. Everybody is replaceable, and you are making a business decision. Don't threaten to quit, as you are replaceable.</span><br /> -<br /> -<h1 style='display: inline' id='other-things'>Other things</h1><br /> -<br /> -<ul> -<li>As a leader, lead by example and don't lead from the Eiffel tower.</li> -<li>As a leader you are responsible for the team. If the team fails, then it's your fault only.</li> -</ul><br /> -<h2 style='display: inline' id='testing'>Testing</h2><br /> -<br /> -<span>Unit testing vs regression testing: Unit tests test the smallest possible unit and get rewritten if the unit gets changed. It's like programming against a specification. Regression tests test whether the software still works after the change. 
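The unit-vs-regression distinction can be sketched in a few lines. This is a hypothetical illustration, not taken from the book; the function and the pinned values are made up:

```python
def slugify(title):
    """The 'unit' under test: turn a post title into a URL slug."""
    return "-".join(title.lower().split())

# Unit test: checks the unit against its current specification. If the
# specification of slugify() changes, this test is rewritten along with it.
assert slugify("Hello World") == "hello-world"

# Regression test: pins previously observed behavior, so that a later
# refactoring cannot silently change output that callers already rely on.
KNOWN_GOOD = {"The Pragmatic Programmer": "the-pragmatic-programmer"}
for title, expected in KNOWN_GOOD.items():
    assert slugify(title) == expected
```

The unit test and its function evolve together; the regression suite only grows, recording behavior the rest of the system depends on.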
Now you know more than most software engineers.</span><br /> -<br /> -<h2 style='display: inline' id='books-to-read'>Books to read</h2><br /> -<br /> -<ul> -<li>Clean Code</li> -<li>Code Complete</li> -<li>Cracking the Coding Interview</li> -<li>Daniel Pink's book "Drive" (about internal and external motivation)</li> -<li>God's Debris (by Scott Adams, the inventor of Dilbert)</li> -<li>Head First Design Patterns</li> -<li>How to Win Friends and Influence People</li> -<li>Never Split the Difference [X]</li> -<li>Structure and programming functional programs</li> -<li>The Obstacle is the Way [X]</li> -<li>The Passionate Programmer</li> -<li>The Power of Positive Thinking (Highly religious - I personally don't like it)</li> -<li>The Pragmatic Programmer [X]</li> -<li>The War of Art (to combat procrastination)</li> -<li>The Willpower Instinct</li> -</ul><br /> -<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br /> -<br /> -<span>Other book notes of mine are:</span><br /> -<br /> -<a class='textlink' href='./2025-06-07-a-monks-guide-to-happiness-book-notes.html'>2025-06-07 "A Monk's Guide to Happiness" book notes</a><br /> -<a class='textlink' href='./2025-04-19-when-book-notes.html'>2025-04-19 "When: The Scientific Secrets of Perfect Timing" book notes</a><br /> -<a class='textlink' href='./2024-10-24-staff-engineer-book-notes.html'>2024-10-24 "Staff Engineer" book notes</a><br /> -<a class='textlink' href='./2024-07-07-the-stoic-challenge-book-notes.html'>2024-07-07 "The Stoic Challenge" book notes</a><br /> -<a class='textlink' href='./2024-05-01-slow-productivity-book-notes.html'>2024-05-01 "Slow Productivity" book notes</a><br /> -<a class='textlink' href='./2023-11-11-mind-management-book-notes.html'>2023-11-11 "Mind Management" book notes</a><br /> -<a class='textlink' href='./2023-07-17-career-guide-and-soft-skills-book-notes.html'>2023-07-17 "Software Developer's Career Guide and Soft Skills" book notes (You are currently 
reading this)</a><br /> -<a class='textlink' href='./2023-05-06-the-obstacle-is-the-way-book-notes.html'>2023-05-06 "The Obstacle is the Way" book notes</a><br /> -<a class='textlink' href='./2023-04-01-never-split-the-difference-book-notes.html'>2023-04-01 "Never split the difference" book notes</a><br /> -<a class='textlink' href='./2023-03-16-the-pragmatic-programmer-book-notes.html'>2023-03-16 "The Pragmatic Programmer" book notes</a><br /> -<br /> -<a class='textlink' href='../'>Back to the main site</a><br /> - </div> - </content> - </entry> </feed> diff --git a/gemfeed/examples/conf/README.md b/gemfeed/examples/conf/README.md new file mode 100644 index 00000000..b0f5d08a --- /dev/null +++ b/gemfeed/examples/conf/README.md @@ -0,0 +1,9 @@ +conf +==== + +My personal config repositories. Including + +* rexfiles +* k8s/helm manifests +* some docker files +* RCM files (soon?) diff --git a/gemfeed/examples/conf/Rexfile b/gemfeed/examples/conf/Rexfile new file mode 100644 index 00000000..74260007 --- /dev/null +++ b/gemfeed/examples/conf/Rexfile @@ -0,0 +1,3 @@ +require for <'*/Rexfile'>; + +# vim: syntax=perl diff --git a/gemfeed/examples/conf/babylon5/README.md b/gemfeed/examples/conf/babylon5/README.md new file mode 100644 index 00000000..58a0a47e --- /dev/null +++ b/gemfeed/examples/conf/babylon5/README.md @@ -0,0 +1,3 @@ +# Babylon5 + +Some backup of some Docker start scripts of my `babylon5.buetow.org` server, which I deleted as I moved off all containers to AWS ECS Fargate/Terraform https://codeberg.org/snonux/terraform ! diff --git a/gemfeed/examples/conf/babylon5/backup-start b/gemfeed/examples/conf/babylon5/backup-start new file mode 100755 index 00000000..c616ba09 --- /dev/null +++ b/gemfeed/examples/conf/babylon5/backup-start @@ -0,0 +1,64 @@ +#!/usr/bin/bash + +set -euf -o pipefail +declare -r DATE=$(date +%d) + +ensure_directory () { + local -r dir="$1"; shift + + if [ ! 
-d "$dir" ]; then + mkdir "$dir" + chmod 700 "$dir" + fi +} + +get_docker_id () { + local -r image="$1"; shift + docker ps | awk -v image="$image" '$2 == image { print $1 }' +} + +backup_wallabag () { + ensure_directory /opt/backup/wallabag + local -r container="$(get_docker_id 'wallabag/wallabag')" + docker stop "$container" + tar -hcvpzf /opt/backup/wallabag/wallabag.tar.gz.tmp /opt/wallabag && + mv /opt/backup/wallabag/wallabag.tar.gz.tmp /opt/backup/wallabag/wallabag-$DATE.tar.gz && + touch /opt/backup/wallabag.lastrun + docker start "$container" +} + +backup_vaultwarden () { + ensure_directory /opt/backup/vaultwarden + local -r container="$(get_docker_id 'vaultwarden/server:latest')" + docker stop "$container" + tar -hcvpzf /opt/backup/vaultwarden/vaultwarden.tar.gz.tmp /opt/vaultwarden && + mv /opt/backup/vaultwarden/vaultwarden.tar.gz.tmp /opt/backup/vaultwarden/vaultwarden-$DATE.tar.gz && + touch /opt/backup/vaultwarden.lastrun + docker start "$container" +} + +backup_anki () { + ensure_directory /opt/backup/anki-sync-server + local -r container="$(get_docker_id 'anki-sync-server:latest')" + docker stop "$container" + tar -hcvpzf /opt/backup/anki-sync-server/anki-sync-server.tar.gz.tmp /opt/anki-sync-server && + mv /opt/backup/anki-sync-server/anki-sync-server.tar.gz.tmp \ + /opt/backup/anki-sync-server/anki-sync-server-$DATE.tar.gz && + touch /opt/backup/anki-sync-server.lastrun + docker start "$container" +} + +backup_audiobookshelf_meta () { + ensure_directory /opt/backup/audiobookshelf + rsync -avz --delete /opt/audiobookshelf/metadata/backups/ /opt/backup/audiobookshelf +} + +backup_wallabag +backup_vaultwarden +backup_anki +backup_audiobookshelf_meta + +chgrp -R backup /opt/backup/ +find -L /opt/backup -mindepth 2 -type f -exec chmod 640 "{}" \; +find -L /opt/backup -mindepth 2 -type d -exec chmod 750 "{}" \; +chmod 755 /opt/backup/nextcloud/borg diff --git a/gemfeed/examples/conf/babylon5/docker-start-anki-sync-server
b/gemfeed/examples/conf/babylon5/docker-start-anki-sync-server new file mode 100755 index 00000000..a6b3930a --- /dev/null +++ b/gemfeed/examples/conf/babylon5/docker-start-anki-sync-server @@ -0,0 +1,4 @@ +#!/usr/bin/bash + +set -x +docker run -d --name anki-sync-server --user nobody --restart always -v /opt/anki-sync-server/data:/data -p 83:27701 anki-sync-server:latest diff --git a/gemfeed/examples/conf/babylon5/docker-start-audiobookshelf b/gemfeed/examples/conf/babylon5/docker-start-audiobookshelf new file mode 100755 index 00000000..404c787c --- /dev/null +++ b/gemfeed/examples/conf/babylon5/docker-start-audiobookshelf @@ -0,0 +1,12 @@ +#!/usr/bin/bash + +set -x + +docker pull ghcr.io/advplyr/audiobookshelf +docker run -d \ + -p 13378:80 \ + -v /opt/audiobookshelf/config:/config \ + -v /opt/audiobookshelf/metadata:/metadata \ + -v /opt/audiobookshelf/audiobooks:/audiobooks \ + -v /opt/audiobookshelf/podcasts:/podcasts \ + --name audiobookshelf ghcr.io/advplyr/audiobookshelf diff --git a/gemfeed/examples/conf/babylon5/docker-start-nextcloud-aio b/gemfeed/examples/conf/babylon5/docker-start-nextcloud-aio new file mode 100755 index 00000000..0a66afb7 --- /dev/null +++ b/gemfeed/examples/conf/babylon5/docker-start-nextcloud-aio @@ -0,0 +1,15 @@ +#!/usr/bin/bash + +set -x + +sudo docker run \ + --sig-proxy=false \ + --name nextcloud-aio-mastercontainer \ + --restart always \ + --publish 8080:8080 \ + -e APACHE_PORT=82 \ + -e APACHE_IP_BINDING=0.0.0.0 \ + -e NEXTCLOUD_DATADIR=/opt/nextcloud/ncdata \ + --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \ + --volume /var/run/docker.sock:/var/run/docker.sock:ro \ + nextcloud/all-in-one:latest diff --git a/gemfeed/examples/conf/babylon5/docker-start-vaultwarden b/gemfeed/examples/conf/babylon5/docker-start-vaultwarden new file mode 100755 index 00000000..15e1f93a --- /dev/null +++ b/gemfeed/examples/conf/babylon5/docker-start-vaultwarden @@ -0,0 +1,10 @@ +#!/usr/bin/bash + +set -x + +# docker pull 
vaultwarden/server:latest +docker run -d \ + --restart always \ + --name vaultwarden \ + --volume /opt/vaultwarden/data/:/data/ \ + --publish 90:80 vaultwarden/server:latest diff --git a/gemfeed/examples/conf/babylon5/docker-start-wallabag b/gemfeed/examples/conf/babylon5/docker-start-wallabag new file mode 100755 index 00000000..e0656d55 --- /dev/null +++ b/gemfeed/examples/conf/babylon5/docker-start-wallabag @@ -0,0 +1,4 @@ +#!/usr/bin/bash + +set -x +docker run -d --restart always -v /opt/wallabag/data:/var/www/wallabag/data -v /opt/wallabag/images:/var/www/wallabag/web/assets/images -p 81:80 -e "SYMFONY__ENV__DOMAIN_NAME=https://bag.buetow.org" wallabag/wallabag diff --git a/gemfeed/examples/conf/dotfiles/README.md b/gemfeed/examples/conf/dotfiles/README.md new file mode 100644 index 00000000..6fdd2c25 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/README.md @@ -0,0 +1,5 @@ +# dotfiles + +These are all my dotfiles. I can install them locally on my laptop and/or workstation as well as remotely on any server. + +For local installation, also have a read through https://blog.ferki.it/2023/08/11/local-management-with-rex/ diff --git a/gemfeed/examples/conf/dotfiles/Rexfile b/gemfeed/examples/conf/dotfiles/Rexfile new file mode 100644 index 00000000..e0e002e5 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/Rexfile @@ -0,0 +1,226 @@ +use Rex -feature => [ '1.14', 'exec_autodie' ]; +use Rex::Logger; +use File::Basename; + +our $HOME = $ENV{HOME}; + +# In a public Git repository. +our $DOT = "$HOME/git/conf/dotfiles"; + +# In a private Git repository. +our $DOT_PRIVATE = "$HOME/git/conf_private/dotfiles"; + +sub ensure_dir { + my ( $src_glob, $dst_dir, $file_mode ) = @_; + Rex::Logger::info("Ensure dir glob $src_glob"); + + file $dst_dir, + ensure => 'directory', + mode => '0700'; + + file "$dst_dir/" .
basename($_), + ensure => 'present', + source => $_, + mode => $file_mode // '0640' + for glob $src_glob; +} + +sub ensure_file { + my ( $src_file, $dst_file, $file_mode ) = @_; + + file $dst_file, + ensure => 'present', + source => $src_file, + mode => $file_mode // '0640'; +} + +sub ensure { + my ( $src, $dst, $mode ) = @_; + ( $dst =~ /\/$/ ? \&ensure_dir : \&ensure_file )->( $src, $dst, $mode ); +} + +desc 'Install packages on Termux'; +task 'pkg_termux', sub { + my @pkgs = qw/ + ack-grep + ctags + fzf + golang + htop + make + nodejs + ripgrep + rsync + ruby + starship + tig + /; + + for my $pkg (@pkgs) { + Rex::Logger::info("Installing package $pkg"); + pkg $pkg, ensure => 'installed'; + } +}; + +desc 'Install packages on FreeBSD'; +task 'pkg_freebsd', sub { + my @pkgs = qw/ + bat + ctags + fzf + gmake + go + gron + htop + lynx + node + p5-ack + ripgrep + starship + tig + tmux + /; + + for my $pkg (@pkgs) { + Rex::Logger::info("Installing package $pkg"); + pkg $pkg, ensure => 'installed'; + } +}; + +desc 'Install packages on Fedora Linux'; +task 'pkg_fedora', sub { + my @pkgs = qw/ + opendoas + fd-find + nodejs-bash-language-server + fortune-mod + syncthing + ncdu + ack + fish + bat + ctags + fzf + golang + golang-x-tools-gopls + gpaste + gron + htop + java-latest-openjdk-devel + lynx + make + nodejs + perl-File-Slurp + procs + rakudo + Rex + ripgrep + ruby + strace + task2 + tig + tmux + dialect + chromium + strawberry + gnumeric + sway-config-fedora + sway + waybar + zathura + /; + + for my $pkg (@pkgs) { + Rex::Logger::info("Installing package $pkg"); + pkg $pkg, ensure => 'installed'; + } +}; + +desc 'Install ~/.config/helix'; +task 'home_helix', sub { ensure "$DOT/helix/*" => "$HOME/.config/helix/" }; + +desc 'Install ~/.config/ghostty'; +task 'home_ghostty', sub { ensure "$DOT/ghostty/*" => "$HOME/.config/ghostty/" }; + +desc 'Install ~/scripts'; +task 'home_scripts', sub { ensure "$DOT/scripts/*" => "$HOME/scripts/", '0750' }; + +desc 'Install ~/.ssh 
files'; +task 'home_ssh', sub { ensure "$DOT/ssh/config" => "$HOME/.ssh/config", '0600' }; + +desc 'Install BASH configuration'; +task 'home_bash', sub { + ensure "$DOT/bash/bash_profile" => "$HOME/.bash_profile"; + ensure "$DOT/bash/bashrc" => "$HOME/.bashrc"; +}; + +desc 'Install fish configuration'; +task 'home_fish', sub { + + # ensure "$DOT/fish/conf.d/*" => "$HOME/.config/fish/conf.d/"; + my $dest_dir = "$HOME/.config/fish/conf.d"; + if ( !-l $dest_dir ) { + if ( -d $dest_dir ) { + rename $dest_dir, "$dest_dir.old" or die "Could not rename $dest_dir: $!"; + } + symlink "$DOT/fish/conf.d" => $dest_dir or die "Could not create symlink: $!"; + } +}; + +desc 'Install gitsyncer configuration'; +task 'home_gitsyncer', sub { + my $dest_dir = "$HOME/.config/gitsyncer"; + symlink "$DOT/gitsyncer/" => $dest_dir or die "Could not create symlink: $!" unless -l $dest_dir; +}; + +sub isFileSymlink { + my $file = shift; + return -l $file && -e $file; +} + +desc 'Vale and proselint'; +task 'home_vale', sub { + ensure "$DOT/vale.ini" => "$HOME/.vale.ini"; + say 'Now you can run "vale sync"'; +}; + +desc 'Install tmux configuration'; +task 'home_tmux', sub { + ensure "$DOT/tmux/*" => "$HOME/.config/tmux/"; +}; + +desc 'Install Sway configuration'; +task 'home_sway', sub { + ensure "$DOT/sway/config.d/*" => "$HOME/.config/sway/config.d/"; + ensure "$DOT/waybar/*" => "$HOME/.config/waybar/"; +}; + +desc 'Install my signature'; +task 'home_signature', sub { + ensure "$DOT/signature" => "$HOME/.signature"; +}; + +desc 'Install my calendar files'; +task 'home_calendar', sub { + unless ( -d $DOT_PRIVATE ) { + Rex::Logger::info( "$DOT_PRIVATE not there, skipping task", 'warn' ); + } + else { + ensure "$DOT_PRIVATE/calendar/*" => "$HOME/.calendar/"; + } +}; + +desc 'Install my Pipewire config tuned for High-Res audio'; +task 'home_pipewire', sub { + file "$HOME/.config/pipewire" => ensure => 'directory', + mode => '0750'; + ensure + "$DOT/pipewire/pipewire.conf" => "$HOME/.config/pipewire/pipewire.conf", +
'0600'; +}; + +desc 'Install all my ~ files'; +task 'home', sub { + require Rex::TaskList; + run_task $_ for Rex::TaskList->create()->get_all_tasks('^home_'); +}; diff --git a/gemfeed/examples/conf/dotfiles/bash/bash_profile b/gemfeed/examples/conf/dotfiles/bash/bash_profile new file mode 100644 index 00000000..004a7b32 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/bash/bash_profile @@ -0,0 +1,3 @@ +if [ -f $HOME/.bashrc ]; then + source $HOME/.bashrc +fi diff --git a/gemfeed/examples/conf/dotfiles/bash/bashrc b/gemfeed/examples/conf/dotfiles/bash/bashrc new file mode 100644 index 00000000..ec2b10c3 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/bash/bashrc @@ -0,0 +1,15 @@ +# If shell is interactive +if [[ ! -z "$PS1" && ! -f $HOME/.nofish ]]; then + # Use fish if it's installed + if [ -e /opt/local/bin/fish ]; then + exec /opt/local/bin/fish + elif [ -e /bin/fish ]; then + exec /bin/fish + elif [ -e /usr/bin/fish ]; then + exec /usr/bin/fish + elif [ -e /data/data/com.termux/files/usr/bin/fish ]; then + exec /data/data/com.termux/files/usr/bin/fish + fi + + echo 'I might want to install fish on this host' +fi diff --git a/gemfeed/examples/conf/dotfiles/claude/CLAUDE.md b/gemfeed/examples/conf/dotfiles/claude/CLAUDE.md new file mode 100644 index 00000000..ffda0b71 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/claude/CLAUDE.md @@ -0,0 +1,2 @@ +- Whenever updating code, also update the comments in the code to reflect the reality and the reasoning. +- When a function reaches 50 lines of code or more, try to refactor it into several functions of about 30 lines each. In case of a go project, when main.go becomes too large, move code into the ./internal package. 
diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/ai.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/ai.fish new file mode 100644 index 00000000..23ce2b20 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/fish/conf.d/ai.fish @@ -0,0 +1,39 @@ +abbr -a gpt chatgpt +abbr -a gpti "chatgpt --interactive" +abbr -a suggest hexai +abbr -a explain 'hexai explain' +abbr -a aic 'aichat -e' + +# helix-gpt env vars used +# set -gx COPILOT_MODEL gpt-4.1 # can be changed with aimodels function +set -gx COPILOT_MODEL gpt-4o # can be changed with aimodels function +set -gx HANDLER copilot + +# TODO: also reconfigure aichat tool using this function +function aimodels + # nvim for the ai tool wrapper so i can use Copilot Chat from the command line. + set -l NVIM_DIR "$HOME/.config/nvim/" + set -l COPILOT_CHAT_DIR "$NVIM_DIR/pack/copilotchat/start/CopilotChat.nvim/lua/CopilotChat" + + printf "gpt-4o +gpt-5 +gpt-o3 +gpt-4.1 +claude-3.7-sonnet +claude-3.7-sonnet-thought +claude-4.0-sonnet +gemini-2.5-pro" >~/.aimodels + + set -gx COPILOT_MODEL (cat ~/.aimodels | fzf) + set -gx OPENAI_MODEL $COPILOT_MODEL + + if test -d $COPILOT_CHAT_DIR + set -l model_config "$COPILOT_CHAT_DIR/config-$COPILOT_MODEL.lua" + if test -f "$model_config" + echo "Using CopilotChat config from $model_config" + cp -v $model_config "$COPILOT_CHAT_DIR/config.lua" + else + echo "No config found at $model_config" + end + end +end diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/alternatives.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/alternatives.fish new file mode 100644 index 00000000..491cf1fe --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/fish/conf.d/alternatives.fish @@ -0,0 +1,17 @@ +if type -q bat + alias Cat=/usr/bin/cat + alias cat=bat +end +if type -q see + alias ca=see +end +if type -q bit + alias Git=/usr/bin/git + alias git=bit +end +if type -q procs + alias p='procs' +end +if type -q carl + alias cal='carl' +end diff --git 
a/gemfeed/examples/conf/dotfiles/fish/conf.d/config.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/config.fish new file mode 100644 index 00000000..670ca861 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/fish/conf.d/config.fish @@ -0,0 +1,31 @@ +fish_vi_key_bindings + +# Add paths to PATH +set -U fish_user_paths ~/bin ~/scripts ~/go/bin ~/.cargo/bin $fish_user_paths + +if command -q -v doas >/dev/null + abbr -a s doas +else + abbr -a s sudo +end + +abbr -a g 'grep -E -i' +abbr -a no 'grep -E -i -v' +abbr -a not 'grep -E -i -v' +abbr -a gl 'git log --pretty=oneline --graph --decorate --all' +abbr -a gp 'begin; git commit -a; and git pull; and git push; end' + +for dir in ~/.config/fish/conf.d.work ~/.config/fish/conf.d.local + if test -d $dir + for file in $dir/*.fish + source $file + end + end +end + +if test -d /home/linuxbrew/.linuxbrew + if status is-interactive + # Commands to run in interactive sessions can go here + end + eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)" +end diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/dotfiles.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/dotfiles.fish new file mode 100644 index 00000000..6304d321 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/fish/conf.d/dotfiles.fish @@ -0,0 +1,48 @@ +set -gx DOTFILES_DIR ~/git/rexfiles/dotfiles + +function dotfiles::update + set -l prev_pwd (pwd) + cd $DOTFILES_DIR + rex home + cd "$prev_pwd" +end + +function dotfiles::update::git + set -l prev_pwd (pwd) + cd $DOTFILES_DIR + git pull + git commit -a + git push + rex home + cd "$prev_pwd" +end + +function dotfiles::fuzzy::edit + set -l prev_pwd (pwd) + cd $DOTFILES_DIR + set -l dotfile (find . 
-type f -not -path '*/.git/*' | fzf) + $EDITOR "$dotfile" + if echo "$dotfile" | grep -F -q .fish + echo "Sourcing $dotfile" + source "$dotfile" + end + cd "$prev_pwd" +end + +function dotfiles::rexify + cd $DOTFILES_DIR + rex home + cd - +end + +function dotfiles::random::edit + $EDITOR (find $DOTFILES_DIR -type f -not -path '*/.git/*' | shuf -n 1) +end + +abbr -a .u 'dotfiles::update' +abbr -a .ug 'dotfiles::update::git' +abbr -a .e 'dotfiles::fuzzy::edit' +abbr -a .rex 'dotfiles::rexify' +abbr -a .re 'dotfiles::random::edit' +abbr -a cdconf "cd $HOME/git/conf" +abbr -a cdotfiles "cd $HOME/git/conf/dotfiles" diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/editor.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/editor.fish new file mode 100644 index 00000000..bda46448 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/fish/conf.d/editor.fish @@ -0,0 +1,44 @@ +set -gx EDITOR hx +set -gx VISUAL $EDITOR +set -gx GIT_EDITOR $EDITOR +set -gx HELIX_CONFIG_DIR $HOME/.config/helix + +function editor::helix::open_with_lock + set -l file $argv[1] + set -l lock "$file.lock" + if test -f "$lock" + echo "File lock $lock exists! Another instance is editing it?" + return 2 + end + touch $lock + hx $file $argv[2..-1] + rm $lock +end + +function editor::helix::open_with_lock::force + set -l file $argv[1] + set -l lock "$file.lock" + if test -f "$lock" + echo "File lock $lock exists! Force deleting it and terminating all $EDITOR instances?" 
+ rm -f $lock + pkill -f $EDITOR + end + touch $lock + hx $file $argv[2..-1] + rm $lock +end + +function editor::helix::edit::remote + set -l local_path $argv[1] + set -l remote_uri $argv[2] + scp $local_path $remote_uri; or return 1 + echo "LOCAL_PATH=$local_path; REMOTE_URI=$remote_uri" >~/.hx.remote.source + hx $local_path +end + +abbr -a lhx 'editor::helix::open_with_lock' +abbr -a hxl 'editor::helix::open_with_lock' +abbr -a hxlf 'editor::helix::open_with_lock::force' +abbr -a lhxf 'editor::helix::open_with_lock::force' +abbr -a rhx 'editor::helix::edit::remote' +abbr -a x hx diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/fuzzy.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/fuzzy.fish new file mode 100644 index 00000000..7683a0e7 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/fish/conf.d/fuzzy.fish @@ -0,0 +1,5 @@ +function __tv_git + tv git-repos +end + +bind \cg __tv_git diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/games.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/games.fish new file mode 100644 index 00000000..291a798f --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/fish/conf.d/games.fish @@ -0,0 +1,15 @@ +function games::colorscript + if test -e ~/git/shell-color-scripts + cd ~/git/shell-color-scripts + set -x DEV 1 + ./colorscript.sh --random + cd - + else + echo 'No colorscripts installed. 
Go to:' + echo ' https://gitlab.com/dwt1/shell-color-scripts' + end +end + +if not test -f ~/.colorscript.disable + games::colorscript +end diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/gos.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/gos.fish new file mode 100644 index 00000000..a23d7a7b --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/fish/conf.d/gos.fish @@ -0,0 +1,6 @@ +set -x GOS_BIN ~/go/bin/gos +set -x GOS_DIR ~/.gosdir + +if test -f $GOS_BIN + alias cdgos "cd $GOS_DIR" +end diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/k8s.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/k8s.fish new file mode 100644 index 00000000..ee1584bf --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/fish/conf.d/k8s.fish @@ -0,0 +1,76 @@ +function kcompletions + if command -q -v kubectl >/dev/null + kubectl completion fish | source + end +end + +# Check if the directory $HOME/.krew exists and update PATH +if test -d $HOME/.krew + set -x PATH (set -q KREW_ROOT; and echo $KREW_ROOT; or echo $HOME/.krew)/bin $PATH +end + +function kpod + set pattern "." 
+ if test -n "$argv[1]" + set pattern "$argv[1]" + end + set -gx POD (kubectl get pods --no-headers | grep "$pattern" | sort -R | head -n 1 | cut -d' ' -f1) + echo "Pod is $POD" +end + +function klogsf + if test -z "$POD" -o -n "$argv[1]" + kpod $argv + end + kubectl logs -f $POD +end + +function klogs + if test -z "$POD" -o -n "$argv[1]" + kpod $argv + end + kubectl logs $POD +end + +function kbash + if test -z "$POD" -o -n "$argv[1]" + kpod $argv + end + kubectl exec -it $POD -- /bin/bash +end + +function kshell + if test -z "$POD" -o -n "$argv[1]" + kpod $argv + end + kubectl exec -it $POD -- /bin/sh +end + +function kdesc + if test -z "$POD" -o -n "$argv[1]" + kpod $argv + end + kubectl describe pod $POD +end + +function kedit + if test -z "$POD" -o -n "$argv[1]" + kpod $argv + end + kubectl edit pod $POD +end + +function k8s::kubectl::config::contexts + kubectl config get-contexts | sed '1d; /\*/d' | awk '{ print $1 }' | sort +end +alias kcontexts="k8s::kubectl::config::contexts" + +function k8s::kubectl::config::use_context + kubectl config use-context (kubectl config get-contexts | sed '1d; /\*/d' | awk '{ print $1 }' | sort | fzf) +end +alias kcontext="k8s::kubectl::config::use_context" + +function k8s::kubectl::config::set_namespace + kubectl config set-context --current --namespace=(kubectl get ns | sed 1d | awk '{ print $1 }' | sort | fzf) +end +alias knamespace="k8s::kubectl::config::set_namespace" diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/quickedit.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/quickedit.fish new file mode 100644 index 00000000..c722acc6 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/fish/conf.d/quickedit.fish @@ -0,0 +1,93 @@ +set -gx QUICKEDIT_DIR ~/QuickEdit + +function quickedit::postaction + set -l file_path $argv[1] + set -l make_run 0 + + if test -f Makefile + make + set make_run 1 + end + + # Go to git toplevel dir (if exists) + cd (dirname $file_path) + set -l git_dir (git rev-parse --show-toplevel 2>/dev/null) + if
test $status -eq 0 + cd $git_dir + end + if not test $make_run -eq 1 + if test -f Makefile + make + end + end + if test -d .git + git commit -a -m Update + git pull + git push + end +end + +function quickedit + set -l prev_dir (pwd) + set -l grep_pattern . + + if test (count $argv) -gt 0 + set grep_pattern $argv[1] + end + + cd $QUICKEDIT_DIR + set files (find -L . -type f -not -path '*/.*' | grep -E "$grep_pattern") + + switch (count $files) + case 0 + echo No result found + return + case 1 + set file_path $files[1] + case '*' + set file_path (printf '%s\n' $files | fzf) + end + + if editor::helix::open_with_lock $file_path + quickedit::postaction $file_path + end + + cd $prev_dir +end + +function quickedit::direct + set -l dir $argv[1] + set -l file $argv[2] + cd $dir + + if editor::helix::open_with_lock $file + quickedit::postaction $file + end + + cd - +end + +function quickedit::scratchpad + quickedit::direct ~/Notes Scratchpad.md +end + +function quickedit::quicknote + quickedit::direct ~/Notes QuickNote.md +end + +function quickedit::performance + quickedit::direct ~/Notes Performance.md +end + +abbr -a e quickedit +abbr -a scratch quickedit::scratchpad +abbr -a S quickedit::scratchpad +abbr -a quicknote quickedit::quicknote +abbr -a performance quickedit::performance +abbr -a goals quickedit::performance +abbr -a er "ranger $QUICKEDIT_DIR" +abbr -a cdquickedit "cd $QUICKEDIT_DIR" +abbr -a cdnotes 'cd ~/Notes' +abbr -a cdfish 'cd ~/.config/fish/conf.d' +abbr -a cddocs 'cd ~/Documents' +abbr -a cdocs 'cd ~/Documents' diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/supersync.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/supersync.fish new file mode 100644 index 00000000..356f773f --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/fish/conf.d/supersync.fish @@ -0,0 +1,114 @@ +set -x SUPERSYNC_STAMP_FILE ~/.supersync.last + +# Only sync the HabitsAndQuotes when it's asked for via function parameter +function supersync::worktime + set -l worktime_dir 
~/git/worktime + + if not test -d $worktime_dir + echo "Warning: Directory $worktime_dir does not exist" + return 1 + end + cd $worktime_dir + + if test (count $argv) -gt 0; and test "$argv[1]" = sync_quotes + if test -d ~/Notes/HabitsAndQuotes + echo "" >work-wisdoms.md.tmp + for notes in ~/Notes/HabitsAndQuotes/{Productivity,Mentoring}.md + grep '^\* ' $notes >>work-wisdoms.md.tmp + end + sort -u work-wisdoms.md.tmp >work-wisdoms.md + rm work-wisdoms.md.tmp + git add work-wisdoms.md + grep '^\* ' ~/Notes/HabitsAndQuotes/Exercise.md >exercises.md + git add exercises.md + end + end + + find . -name '*.txt' -exec git add {} \; + find . -name '*.json' -exec git add {} \; + git commit -a -m sync + + git pull origin master + git push origin master + + cd - +end + +function supersync::uprecords + set -l uprecords_dir ~/git/uprecords + set -l uprecords_repo git@codeberg.org:snonux/uprecords.git + + if not test -d $uprecords_dir + git clone $uprecords_repo $uprecords_dir + cd $uprecords_dir + else + cd $uprecords_dir + git pull + end + + make update + git commit -a -m Update + git push + cd - +end + +function supersync::taskwarrior + if test -f ~/scripts/taskwarriorfeeder.rb + ruby ~/scripts/taskwarriorfeeder.rb + else + echo "No taskwarrior feeder script, skipping" + end + + taskwarrior::export + taskwarrior::export::gos + taskwarrior::import +end + +function supersync::gitsyncer + set enable_file ~/.gitsyncer_enable + set now (date +%s) + set weekly_interval (math 7 \* 24 \* 60 \* 60) + + if not test -f $enable_file + echo $now >$enable_file + else + set last_run (cat $enable_file) + if test (math $now - $last_run) -lt $weekly_interval + return + end + end + + if test -f ~/go/bin/gitsyncer + ~/go/bin/gitsyncer sync bidirectional && ~/go/bin/gitsyncer showcase + end + if test $status -eq 0 + date +%s >$enable_file + end +end + +function supersync + supersync::worktime sync_quotes + supersync::taskwarrior + supersync::worktime no_sync_quotes + supersync::uprecords +
supersync::gitsyncer + + if test -f ~/.gos_enable + gos + end + + date +%s >$SUPERSYNC_STAMP_FILE.tmp + mv $SUPERSYNC_STAMP_FILE.tmp $SUPERSYNC_STAMP_FILE +end + +function supersync::is_it_time_to_sync + set -l max_age 86400 + set -l now (date +%s) + if test -f $SUPERSYNC_STAMP_FILE + set -l diff (math $now - (cat $SUPERSYNC_STAMP_FILE)) + if test $diff -lt $max_age + return 0 + end + end + read -P "It's time to run supersync! Run it? (y/n) " answer; and test "$answer" = y; and supersync +end diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/taskwarrior.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/taskwarrior.fish new file mode 100644 index 00000000..d3192bcd --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/fish/conf.d/taskwarrior.fish @@ -0,0 +1,121 @@ +function taskwarrior::fuzzy::_select + sed -n '/^[0-9]/p' | sort -rn | fzf | cut -d' ' -f1 +end + +function taskwarrior::fuzzy::find + set -g TASK_ID (task ready | taskwarrior::fuzzy::_select) +end + +function taskwarrior::select + set -l task_id "$argv[1]" + if test -n "$task_id" + set -g TASK_ID "$task_id" + end + if test "$TASK_ID" = - -o -z "$TASK_ID" + taskwarrior::fuzzy::find + end +end + +function taskwarrior::due::count + set -l due_count (task status:pending due.before:now count) + + if test $due_count -gt 0 + echo "There are $due_count tasks due!" + end +end + +function taskwarrior::add::track + if test (count $argv) -gt 0 + task add priority:L +personal +track $argv + else + tasksamurai +track + end +end + +function taskwarrior::add::standup + if test (count $argv) -gt 0 + task add priority:L +work +standup +sre +nosched $argv + task add priority:L +work +standup +storage +nosched $argv + + if test -f ~/git/helpers/jira/jira.rb + echo "Do you want to raise a Jira ticket? 
(y/n)" + read -l user_input + if test "$user_input" = y + ruby ~/git/helpers/jira/jira.rb --raise "$argv" + end + end + + else + tasksamurai +standup + end +end + +function taskwarrior::add::standup::editor + set -l tmpfile (mktemp /tmp/standup.XXXXXX.txt) + $EDITOR $tmpfile + taskwarrior::add::standup (cat $tmpfile) +end + +function _taskwarrior::set_import_export_tags + if test (uname) = Darwin + set -gx TASK_IMPORT_TAG work + set -gx TASK_EXPORT_TAG personal + else + set -gx TASK_IMPORT_TAG personal + set -gx TASK_EXPORT_TAG work + end +end + +function taskwarrior::export::gos + task +share status:pending export >"$WORKTIME_DIR/tw-gos-export-$(date +%s).json" + yes | task +share status:pending delete +end + +function taskwarrior::export + _taskwarrior::set_import_export_tags + set -l count (task +$TASK_EXPORT_TAG status:pending count) + + if test $count -eq 0 + return + end + + echo "Exporting $count tasks to $TASK_EXPORT_TAG" + task +$TASK_EXPORT_TAG status:pending export >"$WORKTIME_DIR/tw-$TASK_EXPORT_TAG-export-$(date +%s).json" + yes | task +$TASK_EXPORT_TAG status:pending delete +end + +function taskwarrior::import + _taskwarrior::set_import_export_tags + + find $WORKTIME_DIR -name "tw-$TASK_IMPORT_TAG-export-*.json" | while read -l import + task import $import + rm $import + end + + find $WORKTIME_DIR -name "tw-$(hostname)-export-*.json" | while read -l import + task import $import + rm $import + end +end + +abbr -a t task +abbr -a L 'task add +log' +abbr -a tlog 'task add +log' +abbr -a log 'task add +log' +abbr -a tdue 'tasksamurai status:pending due.before:now' +abbr -a thome 'tasksamurai +home' +abbr -a tasks 'tasksamurai -track' +abbr -a tread 'tasksamurai +read' +abbr -a track 'taskwarrior::add::track' +abbr -a tra 'taskwarrior::add::track' +abbr -a trat 'timr track' +abbr -a tfind 'taskwarrior::fuzzy::find' +abbr -a ts tasksamurai + +# Virtual standup abbrs +abbr -a V 'taskwarrior::add::standup' +abbr -a Vstorage 'tasksamurai +standup +storage'
+abbr -a Vsre 'tasksamurai +standup +sre' +abbr -a Ved 'taskwarrior::add::standup::editor' + +taskwarrior::due::count diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/timr.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/timr.fish new file mode 100644 index 00000000..4f084454 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/fish/conf.d/timr.fish @@ -0,0 +1,25 @@ +function timr_prompt -d "Display the timr status in the prompt" + if command -v timr >/dev/null + set -l timr_status (timr prompt) + if test -n "$timr_status" + set -l icon (string sub -l 1 -- "$timr_status") + set -l time (string sub -s 2 -- "$timr_status") + if test "$icon" = "▶" + set_color green + else + set_color yellow + end + printf '%s' "$icon" + set_color normal + printf ' %s' "$time" + end + end +end + +complete -c timr -n __fish_use_subcommand -a start -d "Start the timer" +complete -c timr -n __fish_use_subcommand -a stop -d "Stop the timer" +complete -c timr -n __fish_use_subcommand -a pause -d "Pause the timer" +complete -c timr -n __fish_use_subcommand -a status -d "Show the timer status" +complete -c timr -n __fish_use_subcommand -a reset -d "Reset the timer" +complete -c timr -n __fish_use_subcommand -a live -d "Show the live timer" +complete -c timr -n __fish_use_subcommand -a prompt -d "Show the prompt status" diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/tmputils.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/tmputils.fish new file mode 100644 index 00000000..20a122ad --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/fish/conf.d/tmputils.fish @@ -0,0 +1,54 @@ +set -gx TMPUTILS_DIR ~/data/tmp +set -gx TMPUTILS_TMPFILE ~/.tmpfile + +function tmpls + if not test -d $TMPUTILS_DIR + return + end + ls $TMPUTILS_DIR +end + +function tmptee + set -l name $argv[1] + if test -z "$name" + set name (date +%s) + else + set -e argv[1] + end + set -l file "$TMPUTILS_DIR/$name" + if not test -d $TMPUTILS_DIR + mkdir -p $TMPUTILS_DIR + end + tee $argv $file + echo $file
>$TMPUTILS_TMPFILE +end + +function tmpcat + set -l name $argv[1] + if test -z "$name" + cat (tmpfile) + return + end + cat "$TMPUTILS_DIR/$name" +end + +function tmpedit + set -l name $argv[1] + if test -z "$name" + $EDITOR (tmpfile) + return + end + $EDITOR "$TMPUTILS_DIR/$name" +end + +function tmpgrep + set -l name $argv[1] + set -e argv[1] + tmpcat $name | grep $argv +end + +function tmpfile + cat $TMPUTILS_TMPFILE +end + +abbr -a cdtmp "cd $TMPUTILS_DIR" diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/tmux.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/tmux.fish new file mode 100644 index 00000000..e65960e0 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/fish/conf.d/tmux.fish @@ -0,0 +1,94 @@ +function _tmux::cleanup_default + tmux list-sessions | string match -r '^T.*: ' | string match -v -r attached | string split -f1 ':' | while read -l s + echo "Killing $s" + tmux kill-session -t "$s" + end +end + +function _tmux::connect_command + set -l server_or_pod $argv[1] + if test -z "$TMUX_KEXEC" + echo "ssh -A -t $server_or_pod" + else + echo "kubectl exec -it $server_or_pod -- /bin/bash" + end +end + +function tmux::new + set -l session $argv[1] + _tmux::cleanup_default + if test -z "$session" + tmux::new (string join "" T (date +%s)) + else + tmux new-session -d -s $session + tmux -2 attach-session -t $session || tmux -2 switch-client -t $session + end +end + +function tmux::attach + set -l session $argv[1] + if test -z "$session" + tmux attach-session || tmux::new + else + tmux attach-session -t $session || tmux::new $session + end +end + +function tmux::remote + set -l server $argv[1] + tmux new -s $server "ssh -A -t $server 'tmux attach-session || tmux'" || tmux attach-session -d -t $server +end + +function tmux::search + set -l session (tmux list-sessions | fzf | cut -d: -f1) + if test -z "$TMUX" + tmux attach-session -t $session + else + tmux switch -t $session + end +end + +function tmux::cluster_ssh + if test -f "$argv[1]" +
tmux::tssh_from_file $argv[1] + return + end + tmux::tssh_from_argument $argv +end + +function tmux::tssh_from_argument + set -l session $argv[1] + set first_server_or_container $argv[2] + set remaining_servers $argv[3..-1] + if test -z "$first_server_or_container" + set first_server_or_container $session + end + + tmux new-session -d -s $session (_tmux::connect_command "$first_server_or_container") + if not tmux list-session | grep "^$session:" + echo "Could not create session $session" + return 2 + end + for server_or_container in $remaining_servers + tmux split-window -t $session "tmux select-layout tiled; $(_tmux::connect_command "$server_or_container")" + end + tmux setw -t $session synchronize-panes on + tmux -2 attach-session -t $session || tmux -2 switch-client -t $session +end + +function tmux::tssh_from_file + set -l serverlist $argv[1] + set -l session (basename $serverlist | cut -d. -f1) + tmux::tssh_from_argument $session (awk '{ print $1 }' $serverlist | sed 's/.lan./.lan/g') +end + +alias tn 'tmux::new' +alias ta 'tmux::attach' +alias tx 'tmux::remote' +alias ts 'tmux::search' +alias tssh 'tmux::cluster_ssh' +alias tm tmux +alias tl 'tmux list-sessions' +alias foo 'tmux::new foo' +alias bar 'tmux::new bar' +alias baz 'tmux::new baz' diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/update.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/update.fish new file mode 100644 index 00000000..935b6302 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/fish/conf.d/update.fish @@ -0,0 +1,75 @@ +function update::tools + set pids + + echo "Installing/updating gofumpt" + go install mvdan.cc/gofumpt@latest & + set -a pids $last_pid + + echo "Installing/updating mage" + go install github.com/magefile/mage@latest & + set -a pids $last_pid + + echo "Installing/updating golangci-lint" + go install github.com/golangci/golangci-lint/v2/cmd/golangci-lint@latest & + set -a pids $last_pid + + echo "Installing/updating goimports" + go install 
golang.org/x/tools/cmd/goimports@latest & + set -a pids $last_pid + + for prog in hexai hexai-lsp hexai-tmux-action + echo "Installing/updating $prog from codeberg.org/snonux/hexai/cmd/$prog@latest" + go install codeberg.org/snonux/hexai/cmd/$prog@latest & + set -a pids $last_pid + end + + for prog in tasksamurai timr + echo "Installing/updating $prog from codeberg.org/snonux/$prog/cmd/$prog@latest" + go install codeberg.org/snonux/$prog/cmd/$prog@latest & + set -a pids $last_pid + end + + if test (uname) = Darwin + echo 'Updating cursor-agent on macOS' + # Runs in the foreground, so no background pid to collect here. + cursor-agent update + end + + if test (uname) = Linux + echo "Installing/updating tgpt" + go install github.com/aandrew-me/tgpt/v2@latest & + set -a pids $last_pid + + for prog in gos gitsyncer + echo "Installing/updating $prog from codeberg.org/snonux/$prog/cmd/$prog@latest" + go install codeberg.org/snonux/$prog/cmd/$prog@latest + end + + echo "Installing/updating @anthropic-ai/claude-code globally via npm" + doas npm uninstall -g @anthropic-ai/claude-code + doas npm install -g @anthropic-ai/claude-code + + # doas npm uninstall -g @qwen-code/qwen-code@latest + # doas npm install -g @qwen-code/qwen-code@latest + + echo "Installing/updating @openai/codex globally via npm" + doas npm uninstall -g @openai/codex + doas npm install -g @openai/codex + + echo "Installing/updating @google/gemini-cli globally via npm" + doas npm uninstall -g @google/gemini-cli + doas npm install -g @google/gemini-cli + + # echo "Installing/updating @sourcegraph/amp globally via npm" + # doas npm uninstall -g @sourcegraph/amp + # doas npm install -g @sourcegraph/amp + + echo "Installing/updating opencode-ai globally via npm" + doas npm uninstall -g opencode-ai + doas npm install -g opencode-ai + end + + for pid in $pids + wait $pid + end +end diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/utils.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/utils.fish new file mode 100644 index 00000000..0f112177 --- 
/dev/null +++ b/gemfeed/examples/conf/dotfiles/fish/conf.d/utils.fish @@ -0,0 +1,142 @@ +function fullest_h + df -h | sort -n -k 5 +end + +function fullest_i + df -i | sort -n -k 5 +end + +function usortn + sort | uniq -c | sort -n +end + +function asum + awk '{ sum += $1 } END { print sum }' +end + +function stop + set -l service $argv[1] + sudo service $service stop $argv +end + +function start + set -l service $argv[1] + sudo service $service start $argv +end + +function restart + set -l service $argv[1] + sudo service $service restart $argv +end + +function statuss + set -l service $argv[1] + sudo service $service status $argv +end + +function loop + set -l sleep 10 + if set -q SLEEP + set sleep $SLEEP + end + echo "sleep is $sleep" 1>&2 + while true + $argv + sleep $sleep + end +end + +function f + find . -iname "*$argv*" +end + +function random + set -l upto $argv[1] + set -l random (math $RANDOM % $upto) + echo "Sleeping $random seconds" + sleep $random +end + +function dedup + set -l file $argv[1] + if test -z $file + awk '{ if (line[$0] != 42) { print $0 }; line[$0] = 42; }' + else + awk '{ if (line[$0] != 42) { print $0 }; line[$0] = 42; }' $file | sudo tee $file.dedup >/dev/null + if test ! -f $file.dedupbak + sudo mv $file $file.dedupbak + end + sudo mv $file.dedup $file + wc -l $file $file.dedupbak + sudo gzip --best $file.dedupbak & + end +end + +function dedup_no_bak + set -l file $argv[1] + if test -z $file + awk '{ if (line[$0] != 42) { print $0 }; line[$0] = 42; }' + else + awk '{ if (line[$0] != 42) { print $0 }; line[$0] = 42; }' $file | sudo tee $file.dedup >/dev/null + if test ! 
-f $file.dedupbak + sudo mv $file $file.dedupbak + end + sudo mv $file.dedup $file + wc -l $file $file.dedupbak + sudo rm -v $file.dedupbak & + end +end + +function drop_caches + echo 3 | sudo tee /proc/sys/vm/drop_caches +end + +function ssl_connect + set -l address $argv[1] + openssl s_client -connect $address +end + +function ssl_dates + ssl_connect $argv | openssl x509 -noout -dates +end + +function lastu + last | grep -E -v '(root|cron|nagios)' +end + +function lastl + lastu | less +end + +abbr wetter 'curl http://wttr.in' + +abbr tf terraform + +function touchtype + tt --noskip --noreport --showwpm --bold --theme (tt -list themes | sort -R | head -n1) $argv +end + +function touchtype::quote + while true + touchtype -quotes en + sleep 0.2 + end +end + +abbr typing 'touchtype::quote' + +function sway_config_view + less /etc/sway/config +end + +function ssh::force + set -l server $argv[1] + ssh-keygen -R $server + ssh -A $server +end + +if test -f ~/git/geheim/geheim.rb + function geheim + ruby ~/git/geheim/geheim.rb $argv + end +end diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/worktime.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/worktime.fish new file mode 100644 index 00000000..f2f7f5d6 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/fish/conf.d/worktime.fish @@ -0,0 +1,122 @@ +set -gx WORKTIME_DIR ~/git/worktime + +if test (uname) = Darwin -a ! 
-f ~/.wtloggedin + echo "Warn: Not logged in, run wtlogin" +end + +function worktime + ruby $WORKTIME_DIR/worktime.rb $argv +end + +function worktime::sync + cd $WORKTIME_DIR + git commit -a -m sync + git pull + git push + cd - +end + +function worktime::wisdom_reminder + if test -f $WORKTIME_DIR/work-wisdoms.md + sed -n '/^\* / { s/\* //; p; }' $WORKTIME_DIR/work-wisdoms.md | sort -R | head -n 1 + end +end + +function worktime::report + if test -f ~/.wtloggedin + if test -f ~/.wtmaster + worktime --report | tee $WORKTIME_DIR/report.txt + else + worktime --report + end + worktime::wisdom_reminder + end +end + +function worktime::add + set -l seconds $argv[1] + set -l what $argv[2] + set -l descr $argv[3] + set -l epoch (date +%s) + + if test -z "$what" + set what work + end + + if test -z "$descr" + worktime --add $seconds --epoch $epoch --what $what + else + worktime --add $seconds --epoch $epoch --what $what --descr "$descr" + end + + worktime::report +end + +function worktime::log + set -l seconds $argv[1] + set -l what $argv[2] + set -l epoch (date +%s) + + if test -z "$what" + set what work + end + + worktime --log --epoch $epoch --what $what + worktime::report +end + +function worktime::login + set -l what $argv[1] + if test -z "$what" + set what work + end + touch ~/.wtloggedin + worktime --login --what $what + worktime::wisdom_reminder +end + +function worktime::logout + set -l what $argv[1] + + if test -z "$what" + set what work + end + + if test -f ~/.wtloggedin + rm ~/.wtloggedin + end + + worktime --logout --what $what + worktime::report +end + +function worktime::status + worktime::report + + if test -f ~/.wtloggedin + echo "You are logged in" + set -l num_worklog (ls $WORKTIME_DIR | grep wl- | wc -l) + if test $num_worklog -gt 0 + echo "$num_worklog entries in the worklog" + end + else + echo "You are not logged in" + end +end + +abbr -a cdworktime "cd $WORKTIME_DIR" +abbr -a wt worktime +abbr -a wtedit 'worktime --edit' +abbr -a wtreport 'worktime 
--report' +abbr -a wtadd 'worktime::add' +abbr -a wtlog 'worktime::log' +abbr -a wtlogin 'worktime::login' +abbr -a wtlogout 'worktime::logout' +abbr -a wtstatus 'worktime::status' +abbr -a wtsync 'worktime::sync' +abbr -a wtf 'worktime --report' +abbr -a random_exercise "sort -R $WORKTIME_DIR/exercises.md | head -n 1" +abbr -a random_exercises "sort -R $WORKTIME_DIR/exercises.md | head -n 10" +abbr -a wl 'task add +work' +abbr -a ql 'task add +personal' +abbr -a pl 'task add +personal' diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/zoxide.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/zoxide.fish new file mode 100644 index 00000000..8fbd5d61 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/fish/conf.d/zoxide.fish @@ -0,0 +1,6 @@ +if type -q zoxide + echo Sourcing zoxide for fish shell... + zoxide init fish | source +else + echo "zoxide not installed?" +end diff --git a/gemfeed/examples/conf/dotfiles/fish/conf.d/zsh.fish b/gemfeed/examples/conf/dotfiles/fish/conf.d/zsh.fish new file mode 100644 index 00000000..06174d84 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/fish/conf.d/zsh.fish @@ -0,0 +1,12 @@ +# To run a ZSH function in fish, you can use the following function. +function Z + touch ~/.nofish + zsh -i -c "$argv" + rm ~/.nofish +end + +function B + touch ~/.nofish + bash -i -c "$argv" + rm ~/.nofish +end diff --git a/gemfeed/examples/conf/dotfiles/ghostty/config b/gemfeed/examples/conf/dotfiles/ghostty/config new file mode 100644 index 00000000..e1095832 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/ghostty/config @@ -0,0 +1,17 @@ +window-decoration = true +copy-on-select = true +quick-terminal-position = bottom +quick-terminal-screen = mouse +shell-integration = zsh +bold-is-bright = true + +# Toggle window decorations only works on Linux! 
+keybind = ctrl+shift+d=toggle_window_decorations +keybind = ctrl+shift+f=toggle_fullscreen +keybind = ctrl+shift+g=reload_config +# Toggle quick terminal only supported for MacOS +keybind = global:ctrl+shift+t=toggle_quick_terminal +keybind = ctrl+shift+c=copy_to_clipboard +keybind = ctrl+shift+v=paste_from_clipboard +keybind = ctrl+shift+w=paste_from_selection + diff --git a/gemfeed/examples/conf/dotfiles/gitsyncer/config.json b/gemfeed/examples/conf/dotfiles/gitsyncer/config.json new file mode 100644 index 00000000..3ebb7780 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/gitsyncer/config.json @@ -0,0 +1,33 @@ +{ + "organizations": [ + { + "host": "git@codeberg.org", + "name": "snonux" + }, + { + "host": "git@github.com", + "name": "snonux" + }, + { + "host": "paul@t450:git", + "backupLocation": true + } + ], + "repositories": [], + "skip_releases": { + "fapi": [ + "0.0.1" + ] + }, + "exclude_from_showcase": [ + "bratwurstmitsenf", + "Adv360-Pro-ZMK", + "katana", + "playground", + "pages", + "nvim" + ], + "exclude_branches": [ + "^codex/" + ] +}
\ No newline at end of file diff --git a/gemfeed/examples/conf/dotfiles/helix/config.toml b/gemfeed/examples/conf/dotfiles/helix/config.toml new file mode 100644 index 00000000..0d96c3ff --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/helix/config.toml @@ -0,0 +1,87 @@ +theme = "adwaita-dark" + +[editor] +bufferline = "always" +rulers = [80, 100, 120, 140] +line-number = "relative" +mouse = true +cursorline = true +cursorcolumn = true +continue-comments = false +completion-timeout = 2000 + +[editor.soft-wrap] +enable = true + +[editor.inline-diagnostics] +# cursor-line = "hint" + +[editor.auto-save] +focus-lost = true +after-delay.timeout = 3000 +after-delay.enable = true + +[editor.statusline] +left = ["version-control", "mode", "spinner", "file-name", "position" ] +center = ["diagnostics"] +right = ["selections", "file-encoding", "file-line-ending", "file-type"] + +[editor.lsp] +display-messages = true +display-inlay-hints = false + +[editor.cursor-shape] +normal = "block" +insert = "underline" +select = "bar" + +[editor.whitespace.render] +space = "none" +tab = "none" +newline = "none" + +[keys.normal] +D = ["ensure_selections_forward", "extend_to_line_end"] +S = ["ensure_selections_forward", "extend_to_line_start"] +0 = ["select_mode", "extend_to_file_start"] +G = ["ensure_selections_forward", "extend_to_file_end"] +"^" = ["move_prev_word_start", "move_next_word_end", "search_selection", "global_search"] +"ret" = "goto_word" + +C-c = "yank_main_selection_to_clipboard" +C-v = { b = "paste_clipboard_before", a = "paste_clipboard_after", r = ":clipboard-paste-replace" } +A-c = "toggle_comments" # Was originally C-c, so mapped to ALT now + +# Helix related helpers +C-h = { c = ":config-open", r = ":config-reload", C = ":run-shell-command cp -v ~/.config/helix/*.toml ~/git/conf/dotfiles/helix/", l = ":open ~/.config/helix/languages.toml", h = ":open ~/git/worktime/HelixCheat.md", L = ":log-open", d = ":theme default" } + +C-r = [ ":config-reload", ":reload-all" ] 
+ +C-u = [ ":write", ":run-shell-command sh -c 'source ~/.hx.remote.source; scp $LOCAL_PATH $REMOTE_URI && echo Uploaded to $REMOTE_URI || echo Failed uploading to $REMOTE_URI'"] + +# Various helpers +C-s = { e = ":set-option soft-wrap.enable true", d = ":set-option soft-wrap.enable false", s = "save_selection" } + +# Buffer stuff +C-q = ":buffer-close" + +# AI commands are good here. +C-p = { c = ":pipe ai correct this sentence and only print out the corrected text", r = ":pipe ai restructure and reword the input and dont leave information out and only print out the new text", a = ":pipe ai rewrite this in a more casual style", n = ":pipe ai these are book notes of mine. correct the grammar and re-organize the notes. use bullet points for short information and whole paragraphs for longer one. the output must be in Gemini Gemtext format with the star * as the bullet point symbol and not the minus - . dont leave out any content.", p = ":pipe ai" } +# Will replace the above +C-a = ":pipe hexai-tmux-action" + +# Git commands +C-g = { d = ":run-shell-command git diff", p = ":run-shell-command git pull", u = ":run-shell-command git push", t = ":run-shell-command tmux new-window -n hx-git-tig tig", c = ":run-shell-command tmux split-window -v 'git commit -a'" } + +# Build commands +C-l = { m = ":run-shell-command make", d = ":run-shell-command go-task dev", r = ":run-shell-command tmux new-window -n hx-go-task-run 'go-task run'" } + +[keys.normal.space] +B = "file_picker_in_current_buffer_directory" +Q = [ ":cd ~/QuickEdit", "file_picker_in_current_directory" ] + +[keys.select] +"{" = "goto_prev_paragraph" +"}" = "goto_next_paragraph" +n = ["extend_search_next", "merge_selections"] +N = ["extend_search_prev", "merge_selections"] diff --git a/gemfeed/examples/conf/dotfiles/helix/languages.toml b/gemfeed/examples/conf/dotfiles/helix/languages.toml new file mode 100644 index 00000000..60e6a19c --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/helix/languages.toml @@ -0,0 
+1,203 @@ +[[language]] +name = "hcl" +scope = "source.hcl" +injection-regex = "(hcl|tf|nomad)" +language-id = "terraform" +file-types = ["hcl", "tf", "nomad"] +comment-token = "#" +block-comment-tokens = { start = "/*", end = "*/" } +indent = { tab-width = 2, unit = " " } +language-servers = [ "terraform-ls", "hexai-lsp" ] +auto-format = true + +[[language]] +name = "go" +auto-format = true +diagnostic-severity = "hint" +formatter = { command = "hx.goformatter" } +language-servers = [ "gopls", "golangci-lint-lsp", "hexai-lsp" ] +[language-server.hexai-lsp] +command = "hexai-lsp" + +[language-server.gopls] +command = "gopls" + +[language-server.gopls.config.hints] +assignVariableTypes = true +compositeLiteralFields = true +constantValues = true +functionTypeParameters = true +parameterNames = true +rangeVariableTypes = true + +# go install github.com/nametake/golangci-lint-langserver@latest +[language-server.golangci-lint-lsp] +command = "golangci-lint-langserver" + +# golangci-lint-langserver depends on/calls golangci-lint +# go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest +[language-server.golangci-lint-lsp.config] +command = ["golangci-lint", "run", "--issues-exit-code=1"] +# command = ["golangci-lint", "run", "--out-format", "json", "--issues-exit-code=1"] + +[[language]] +name = "c" +scope = "source.c" +injection-regex = "c" +file-types = ["c", "h"] +comment-token = "//" +language-servers = [ "clangd", "hexai-lsp" ] +indent = { tab-width = 2, unit = " " } + +[[grammar]] +name = "c" +source = { git = "https://github.com/tree-sitter/tree-sitter-c", rev = "7175a6dd5fc1cee660dce6fe23f6043d75af424a" } + +[language-server.clangd] +command = "clangd" + +[[language]] +name = "perl" +auto-format = true +formatter = { command = "perltidy", args = ["-l=120"] } +scope = "source.perl" +file-types = ["pl", "pm", "t", "psgi", "raku", "rakumod", "rakutest", "rakudoc", "nqp", "p6", "pl6", "pm6", { glob = "Rexfile" }] +shebangs = ["perl"] +comment-token = "#" 
+language-servers = [ "perlnavigator", "hexai-lsp" ] +indent = { tab-width = 2, unit = " " } + +[[grammar]] +name = "perl" +source = { git = "https://github.com/tree-sitter-perl/tree-sitter-perl", rev = "e99bb5283805db4cb86c964722d709df21b0ac16" } + +[[language]] +name = "pod" +scope = "source.pod" +injection-regex = "pod" +file-types = ["pod"] + +[[grammar]] +name = "pod" +source = { git = "https://github.com/tree-sitter-perl/tree-sitter-pod", rev = "39da859947b94abdee43e431368e1ae975c0a424" } + +[[language]] +name = "ruby" +auto-format = true +scope = "source.ruby" +injection-regex = "ruby" +file-types = [ + "rb", + "rbs", + "rake", + "irb", + "gemspec", + { glob = "Gemfile" }, + { glob = "Rakefile" } +] +shebangs = ["ruby"] +comment-token = "#" +language-servers = [ "ruby-lsp", "solargraph", "rubocop", "hexai-lsp" ] +indent = { tab-width = 2, unit = " " } + +[[grammar]] +name = "ruby" +source = { git = "https://github.com/tree-sitter/tree-sitter-ruby", rev = "206c7077164372c596ffa8eaadb9435c28941364" } + +[[language]] +name = "bash" +scope = "source.bash" +injection-regex = "(shell|bash|zsh|sh)" +file-types = [ + "sh", + "bash", + "zsh", + "zshenv", + "zlogin", + "zlogout", + "zprofile", + "zshrc", + "eclass", + "ebuild", + "bazelrc", + "Renviron", + "zsh-theme", + "ksh", + "cshrc", + "tcshrc", + "bashrc_Apple_Terminal", + "zshrc_Apple_Terminal", + { glob = "*zshrc*" }, +] +shebangs = ["sh", "bash", "dash", "zsh"] +comment-token = "#" +language-servers = [ "bash-language-server", "hexai-lsp" ] +indent = { tab-width = 2, unit = " " } + +[[language]] +name = "fish" +# scope = "source.fish" +# injection-regex = "(fish)" +# file-types = [ +# "fish", +# ] +# shebangs = ["fish" ] +# comment-token = "#" +language-servers = [ "fish-lsp", "hexai-lsp" ] +# indent = { tab-width = 4, unit = " " } + +[[grammar]] +name = "bash" +source = { git = "https://github.com/tree-sitter/tree-sitter-bash", rev = "275effdfc0edce774acf7d481f9ea195c6c403cd" } + +[language-server] 
+bash-language-server = { command = "bash-language-server", args = ["start"] } +vale-ls = { command = "vale-ls" } +ruby-lsp = { command = "ruby-lsp"} +rubocop = { command = "rubocop", args = ["--lsp"] } + +[[language]] +name = "markdown" +scope = "source.md" +injection-regex = "md|markdown" +file-types = ["md", "markdown", "mkd", "mdwn", "mdown", "markdn", "mdtxt", "mdtext", "workbook", "gmi", "tpl", "txt" ] +roots = [".marksman.toml"] +language-servers = [ "marksman", "markdown-oxide", "vale-ls", "hexai-lsp"] +indent = { tab-width = 2, unit = " " } + +[[grammar]] +name = "markdown" +source = { git = "https://github.com/MDeiml/tree-sitter-markdown", rev = "aaf76797aa8ecd9a5e78e0ec3681941de6c945ee", subpath = "tree-sitter-markdown" } + +[[language]] +name = "markdown.inline" +scope = "source.markdown.inline" +injection-regex = "markdown\\.inline" +file-types = [] +grammar = "markdown_inline" + +[[grammar]] +name = "markdown_inline" +source = { git = "https://github.com/MDeiml/tree-sitter-markdown", rev = "aaf76797aa8ecd9a5e78e0ec3681941de6c945ee", subpath = "tree-sitter-markdown-inline" } + +[[language]] +name = "gemini" +scope = "source.gmi" +file-types = ["gmi", "tpl"] + +[[grammar]] +name = "gemini" +source = { git = "https://git.sr.ht/~nbsp/tree-sitter-gemini", rev = "3cc5e4bdf572d5df4277fc2e54d6299bd59a54b3" } + +[[language]] +name = "java" +scope = "source.java" +injection-regex = "java" +file-types = ["java", "jav", "pde"] +roots = ["pom.xml", "build.gradle", "build.gradle.kts"] +language-servers = [ "jdtls", "hexai-lsp" ] +indent = { tab-width = 2, unit = " " } + +[[grammar]] +name = "java" +source = { git = "https://github.com/tree-sitter/tree-sitter-java", rev = "09d650def6cdf7f479f4b78f595e9ef5b58ce31e" } diff --git a/gemfeed/examples/conf/dotfiles/nvim/init.lua b/gemfeed/examples/conf/dotfiles/nvim/init.lua new file mode 100644 index 00000000..c3b8701d --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/nvim/init.lua @@ -0,0 +1,70 @@ + 
+require("CopilotChat").setup { + -- See Configuration section for options +} + +local timer = vim.loop.new_timer() -- Initialize the timer + +vim.api.nvim_create_autocmd("BufEnter", { + pattern = "*", + callback = function() + if vim.bo.filetype == "copilot-chat" then + local copilot_chat_buf = vim.api.nvim_get_current_buf() + vim.cmd("wincmd _") -- Maximize height + vim.cmd("wincmd |") -- Maximize width + local file_path = vim.fn.expand("~/.copilot_chat_output.txt") + + -- Start the timer with a 1-second interval + timer:start(1000, 1000, vim.schedule_wrap(function() + if copilot_chat_buf and vim.api.nvim_buf_is_valid(copilot_chat_buf) then + -- Get all lines in the buffer + local lines = vim.api.nvim_buf_get_lines(copilot_chat_buf, 0, -1, false) + + -- Check for the stopping condition + local user_line_count = 0 + for _, line in ipairs(lines) do + if line:find("^## User") then + user_line_count = user_line_count + 1 + if user_line_count >= 2 then + print("Stopping write process: Two '## User' lines detected.") + timer:stop() + -- Write the buffer content to the file + vim.api.nvim_buf_call(copilot_chat_buf, function() + vim.cmd("write! " .. file_path) + end) + vim.cmd("qa!") + return + end + end + end + + -- Write the buffer content to the file + vim.api.nvim_buf_call(copilot_chat_buf, function() + vim.cmd("write! " .. file_path) + end) + end + end)) + end + end, +}) + +vim.api.nvim_create_user_command('CopilotAsk', function(args) + local chat = require("CopilotChat") + local input + if args.args and args.args ~= "" then + input = args.args + else + local input_file = os.getenv("HOME") .. "/.copilot_chat_input.txt" + local file = io.open(input_file, "r") + if file then + input = file:read("*all") + file:close() + else + print("Error: Unable to open input file.") + return + end + end + chat.ask(input) +end, { force = true, range = true, nargs = "?" 
}) + + diff --git a/gemfeed/examples/conf/dotfiles/pipewire/pipewire.conf b/gemfeed/examples/conf/dotfiles/pipewire/pipewire.conf new file mode 100644 index 00000000..a97c99e7 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/pipewire/pipewire.conf @@ -0,0 +1,257 @@ +# Daemon config file for PipeWire version "0.3.51" # +# +# Copy and edit this file in /etc/pipewire for system-wide changes +# or in ~/.config/pipewire for local changes. +# +# It is also possible to place a file with an updated section in +# /etc/pipewire/pipewire.conf.d/ for system-wide changes or in +# ~/.config/pipewire/pipewire.conf.d/ for local changes. +# + +context.properties = { + ## Configure properties in the system. + #library.name.system = support/libspa-support + #context.data-loop.library.name.system = support/libspa-support + #support.dbus = true + #link.max-buffers = 64 + link.max-buffers = 16 # version < 3 clients can't handle more + #mem.warn-mlock = false + #mem.allow-mlock = true + #mem.mlock-all = false + #clock.power-of-two-quantum = true + #log.level = 2 + #cpu.zero.denormals = false + + core.daemon = true # listening for socket connections + core.name = pipewire-0 # core name and socket name + + ## Properties for the DSP configuration. + default.clock.rate = 48000 + default.clock.allowed-rates = [ 44100 48000 88200 96000 176400 192000 352800 384000 ] + #default.clock.quantum = 1024 + default.clock.min-quantum = 16 + #default.clock.max-quantum = 2048 + #default.clock.quantum-limit = 8192 + #default.video.width = 640 + #default.video.height = 480 + #default.video.rate.num = 25 + #default.video.rate.denom = 1 + # + #settings.check-quantum = false + #settings.check-rate = false + # + # These overrides are only applied when running in a vm. + vm.overrides = { + default.clock.min-quantum = 1024 + } +} + +context.spa-libs = { + #<factory-name regex> = <library-name> + # + # Used to find spa factory names. 
It maps an spa factory name + # regular expression to a library name that should contain + # that factory. + # + audio.convert.* = audioconvert/libspa-audioconvert + api.alsa.* = alsa/libspa-alsa + api.v4l2.* = v4l2/libspa-v4l2 + api.libcamera.* = libcamera/libspa-libcamera + api.bluez5.* = bluez5/libspa-bluez5 + api.vulkan.* = vulkan/libspa-vulkan + api.jack.* = jack/libspa-jack + support.* = support/libspa-support + #videotestsrc = videotestsrc/libspa-videotestsrc + #audiotestsrc = audiotestsrc/libspa-audiotestsrc +} + +context.modules = [ + #{ name = <module-name> + # [ args = { <key> = <value> ... } ] + # [ flags = [ [ ifexists ] [ nofail ] ] + #} + # + # Loads a module with the given parameters. + # If ifexists is given, the module is ignored when it is not found. + # If nofail is given, module initialization failures are ignored. + # + + # Uses realtime scheduling to boost the audio thread priorities. This uses + # RTKit if the user doesn't have permission to use regular realtime + # scheduling. + { name = libpipewire-module-rt + args = { + nice.level = -11 + #rt.prio = 88 + #rt.time.soft = -1 + #rt.time.hard = -1 + } + flags = [ ifexists nofail ] + } + + # The native communication protocol. + { name = libpipewire-module-protocol-native } + + # The profile module. Allows application to access profiler + # and performance data. It provides an interface that is used + # by pw-top and pw-profiler. + { name = libpipewire-module-profiler } + + # Allows applications to create metadata objects. It creates + # a factory for Metadata objects. + { name = libpipewire-module-metadata } + + # Creates a factory for making devices that run in the + # context of the PipeWire server. + { name = libpipewire-module-spa-device-factory } + + # Creates a factory for making nodes that run in the + # context of the PipeWire server. + { name = libpipewire-module-spa-node-factory } + + # Allows creating nodes that run in the context of the + # client. 
Is used by all clients that want to provide + # data to PipeWire. + { name = libpipewire-module-client-node } + + # Allows creating devices that run in the context of the + # client. Is used by the session manager. + { name = libpipewire-module-client-device } + + # The portal module monitors the PID of the portal process + # and tags connections with the same PID as portal + # connections. + { name = libpipewire-module-portal + flags = [ ifexists nofail ] + } + + # The access module can perform access checks and block + # new clients. + { name = libpipewire-module-access + args = { + # access.allowed to list an array of paths of allowed + # apps. + #access.allowed = [ + # /usr/bin/pipewire-media-session + #] + + # An array of rejected paths. + #access.rejected = [ ] + + # An array of paths with restricted access. + #access.restricted = [ ] + + # Anything not in the above lists gets assigned the + # access.force permission. + #access.force = flatpak + } + } + + # Makes a factory for wrapping nodes in an adapter with a + # converter and resampler. + { name = libpipewire-module-adapter } + + # Makes a factory for creating links between ports. + { name = libpipewire-module-link-factory } + + # Provides factories to make session manager objects. + { name = libpipewire-module-session-manager } + + # Use libcanberra to play X11 Bell + #{ name = libpipewire-module-x11-bell + # args = { + # #sink.name = "" + # #sample.name = "bell-window-system" + # #x11.display = null + # #x11.xauthority = null + # } + #} +] + +context.objects = [ + #{ factory = <factory-name> + # [ args = { <key> = <value> ... } ] + # [ flags = [ [ nofail ] ] + #} + # + # Creates an object from a PipeWire factory with the given parameters. + # If nofail is given, errors are ignored (and no object is created). 
+ # + #{ factory = spa-node-factory args = { factory.name = videotestsrc node.name = videotestsrc Spa:Pod:Object:Param:Props:patternType = 1 } } + #{ factory = spa-device-factory args = { factory.name = api.jack.device foo=bar } flags = [ nofail ] } + #{ factory = spa-device-factory args = { factory.name = api.alsa.enum.udev } } + #{ factory = spa-node-factory args = { factory.name = api.alsa.seq.bridge node.name = Internal-MIDI-Bridge } } + #{ factory = adapter args = { factory.name = audiotestsrc node.name = my-test } } + #{ factory = spa-node-factory args = { factory.name = api.vulkan.compute.source node.name = my-compute-source } } + + # A default dummy driver. This handles nodes marked with the "node.always-driver" + # property when no other driver is currently active. JACK clients need this. + { factory = spa-node-factory + args = { + factory.name = support.node.driver + node.name = Dummy-Driver + node.group = pipewire.dummy + priority.driver = 20000 + } + } + { factory = spa-node-factory + args = { + factory.name = support.node.driver + node.name = Freewheel-Driver + priority.driver = 19000 + node.group = pipewire.freewheel + node.freewheel = true + } + } + # This creates a new Source node. It will have input ports + # that you can link, to provide audio for this source. + #{ factory = adapter + # args = { + # factory.name = support.null-audio-sink + # node.name = "my-mic" + # node.description = "Microphone" + # media.class = "Audio/Source/Virtual" + # audio.position = "FL,FR" + # } + #} + + # This creates a single PCM source device for the given + # alsa device path hw:0. You can change source to sink + # to make a sink in the same way. 
+ #{ factory = adapter + # args = { + # factory.name = api.alsa.pcm.source + # node.name = "alsa-source" + # node.description = "PCM Source" + # media.class = "Audio/Source" + # api.alsa.path = "hw:0" + # api.alsa.period-size = 1024 + # api.alsa.headroom = 0 + # api.alsa.disable-mmap = false + # api.alsa.disable-batch = false + # audio.format = "S16LE" + # audio.rate = 48000 + # audio.channels = 2 + # audio.position = "FL,FR" + # } + #} +] + +context.exec = [ + #{ path = <program-name> [ args = "<arguments>" ] } + # + # Execute the given program with arguments. + # + # You can optionally start the session manager here, + # but it is better to start it as a systemd service. + # Run the session manager with -h for options. + # + #{ path = "/usr/bin/pipewire-media-session" args = "" } + # + # You can optionally start the pulseaudio-server here as well + # but it is better to start it as a systemd service. + # It can be interesting to start another daemon here that listens + # on another address with the -a option (eg. -a tcp:4713). + # + #{ path = "/usr/bin/pipewire" args = "-c pipewire-pulse.conf" } +] diff --git a/gemfeed/examples/conf/dotfiles/scripts/README.md b/gemfeed/examples/conf/dotfiles/scripts/README.md new file mode 100644 index 00000000..ecbc8ec0 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/scripts/README.md @@ -0,0 +1,3 @@ +# Scripts installed to my ~/scripts + +Mostly quick-n-dirty ones! 
diff --git a/gemfeed/examples/conf/dotfiles/scripts/ai b/gemfeed/examples/conf/dotfiles/scripts/ai new file mode 100755 index 00000000..abcf4909 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/scripts/ai @@ -0,0 +1,7 @@ +#!/usr/bin/env zsh + +if [ $(uname) = Darwin ]; then + exec hx.nvim-copilot-prompt "$@" +else + exec hx.hexai-prompt "$@" +fi diff --git a/gemfeed/examples/conf/dotfiles/scripts/brokenlinkfinder b/gemfeed/examples/conf/dotfiles/scripts/brokenlinkfinder new file mode 100644 index 00000000..7fe15765 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/scripts/brokenlinkfinder @@ -0,0 +1,73 @@ +#!/usr/bin/env ruby + +require 'net/http' +require 'uri' +require 'nokogiri' +require 'set' + +# Method to fetch and parse HTML from a URL +def fetch_html(url) + response = Net::HTTP.get_response(URI(url)) + response.body if response.is_a?(Net::HTTPSuccess) +rescue StandardError => e + puts "Error fetching #{url}: #{e.message}" + nil +end + +# Method to find and check links on a page +def check_links(url, domain) + html = fetch_html(url) + return unless html + + checked = Set.new + broken = Set.new + + document = Nokogiri::HTML(html) + links = document.css('a').map { |link| link['href'] }.compact + + internal_links = links.select do |link| + link.start_with?('/') || link.start_with?('./') || URI(link).host == domain + end + puts "Internal links: #{internal_links}" + + internal_links.uniq.each do |link| + full_url = link.start_with?('/') || link.start_with?('./') ? 
"#{url}#{link}" : link + full_url.sub!('./', '/') + next if checked.include?(full_url) + + broken << full_url unless check_link(full_url) + checked << full_url + end + + broken +end + +# Method to check if a link is broken +def check_link(url) + uri = URI(url) + response = Net::HTTP.get_response(uri) + + if response.is_a?(Net::HTTPSuccess) + puts "Working link: #{url}" + true + else + puts "Broken link: #{url} (HTTP #{response.code})" + false + end +rescue StandardError => e + puts "Error checking #{url}: #{e.message}" + false +end + +# Main program +if ARGV.length != 1 + puts 'Usage: ruby brokenlinkfinder.rb <URL>' + exit 1 +end + +start_url = ARGV.first +domain = URI(start_url).host + +# check_links returns nil when the start page cannot be fetched +(check_links(start_url, domain) || []).each do |broken| + puts "Broken: #{broken}" +end diff --git a/gemfeed/examples/conf/dotfiles/scripts/gvim b/gemfeed/examples/conf/dotfiles/scripts/gvim new file mode 100755 index 00000000..5777a7ce --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/scripts/gvim @@ -0,0 +1,7 @@ +#!/bin/bash +# Hack so qutebrowser starts an editor (Helix) in a new ghostty terminal. + +declare -r FILE_PATH="$2" +#echo "$@" > /tmp/params.txt + +ghostty -e "hx $FILE_PATH" diff --git a/gemfeed/examples/conf/dotfiles/scripts/hx.aichat-prompt b/gemfeed/examples/conf/dotfiles/scripts/hx.aichat-prompt new file mode 100755 index 00000000..4cafcf5d --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/scripts/hx.aichat-prompt @@ -0,0 +1,9 @@ +#!/usr/bin/env zsh + +declare -xr INSTRUCTIONS='Answer only. If it is code, code only without code-block at the beginning and the end.' + +if [[ $# -eq 0 ]]; then + aichat "$(hx.prompt). $INSTRUCTIONS" +else + aichat "$@. 
$INSTRUCTIONS" +fi diff --git a/gemfeed/examples/conf/dotfiles/scripts/hx.chatgpt-prompt b/gemfeed/examples/conf/dotfiles/scripts/hx.chatgpt-prompt new file mode 100755 index 00000000..e4b6047f --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/scripts/hx.chatgpt-prompt @@ -0,0 +1,3 @@ +#!/usr/bin/env zsh + +chatgpt "$(hx.prompt). Answer only. If it is code, code only without code-block at the beginning and the end." diff --git a/gemfeed/examples/conf/dotfiles/scripts/hx.goformatter b/gemfeed/examples/conf/dotfiles/scripts/hx.goformatter new file mode 100755 index 00000000..028fbb25 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/scripts/hx.goformatter @@ -0,0 +1,3 @@ +#!/bin/sh + +goimports | gofumpt diff --git a/gemfeed/examples/conf/dotfiles/scripts/hx.hexai-prompt b/gemfeed/examples/conf/dotfiles/scripts/hx.hexai-prompt new file mode 100755 index 00000000..ef413c0a --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/scripts/hx.hexai-prompt @@ -0,0 +1,9 @@ +#!/usr/bin/env zsh + +declare -xr INSTRUCTIONS='Answer only. If it is code, code only without code-block at the beginning and the end.' + +if [[ $# -eq 0 ]]; then + hexai "$(hx.prompt). $INSTRUCTIONS" 2>/dev/null +else + hexai "$@. 
$INSTRUCTIONS" 2>/dev/null +fi diff --git a/gemfeed/examples/conf/dotfiles/scripts/hx.nvim-copilot-prompt b/gemfeed/examples/conf/dotfiles/scripts/hx.nvim-copilot-prompt new file mode 100755 index 00000000..dcb28376 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/scripts/hx.nvim-copilot-prompt @@ -0,0 +1,32 @@ +#!/usr/bin/env zsh + +declare -r STDIN_FILE=~/.copilot_prompt_stdin.txt +declare -r INPUT_FILE=~/.copilot_chat_input.txt +declare -r OUTPUT_FILE=~/.copilot_chat_output.txt +declare INPUT_PROMPT + +if [ -f $OUTPUT_FILE.done ]; then + rm $OUTPUT_FILE.done +fi +cat > $STDIN_FILE &>/dev/null + +if [ $# -eq 0 ]; then + INPUT_PROMPT="$(hx.prompt)" +else + INPUT_PROMPT="$@" +fi + +cat <<INPUT_FILE > $INPUT_FILE +$INPUT_PROMPT for the following: + +$(cat $STDIN_FILE) + +If the result is code, print out the code only, don't print the \`\`\`-markers around the code block. +INPUT_FILE + +tmux split-window -v "nvim +':CopilotAsk'; mv $OUTPUT_FILE $OUTPUT_FILE.done" + +while [ ! -f "$OUTPUT_FILE.done" ]; do + sleep 0.2 +done +sed -n '/^## Copilot/,/^## User/ { /^## Copilot/d; /\[file:/d; /^## User/d; p; }' $OUTPUT_FILE.done diff --git a/gemfeed/examples/conf/dotfiles/scripts/hx.prompt b/gemfeed/examples/conf/dotfiles/scripts/hx.prompt new file mode 100755 index 00000000..8dd14dd3 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/scripts/hx.prompt @@ -0,0 +1,14 @@ +#!/usr/bin/env zsh + +declare -r REPLY_FILE=~/.hx-prompt-reply +if [ -f "$REPLY_FILE" ]; then + rm "$REPLY_FILE" +fi + +tmux split-window -v "touch $REPLY_FILE.tmp; hx $REPLY_FILE.tmp; mv $REPLY_FILE.tmp $REPLY_FILE" + +while [ ! 
-f "$REPLY_FILE" ]; do + sleep 0.2 +done + +cat "$REPLY_FILE" diff --git a/gemfeed/examples/conf/dotfiles/scripts/randomnote.rb b/gemfeed/examples/conf/dotfiles/scripts/randomnote.rb new file mode 100644 index 00000000..b0c1b490 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/scripts/randomnote.rb @@ -0,0 +1,30 @@ +#!/usr/bin/env ruby + +NOTES_DIR = "#{ENV['HOME']}/git/foo.zone-content/gemtext/notes" +BOOK_PATH = "#{ENV['HOME']}/Buecher/Diverse/Search-Inside-Yourself.txt" +MIN_PERCENTAGE = 80 +MIN_LENGTH = 10 + +class String + CLEAN_PATTERN = [ + /\d\d\d-\d\d-\d\d/, /[^A-Za-z0-9!.;,?'" @]/, + /http.?:\/\/\S+/, /\S+\.gmi/, /^\./, /^\d/, + ] + def clean + CLEAN_PATTERN.each {|p| gsub! p, '' } + gsub(/\s+/, ' ').strip + end + def letter_percentage?(threshold) = threshold <= (100 * count("A-Za-z")) / length +end + +begin + srand Random.new_seed + puts File.read((Dir["#{NOTES_DIR}/*.gmi"] + [BOOK_PATH]).shuffle.sample) + .split("\n") + .map(&:clean) + .select{ |l| l.length >= MIN_LENGTH } + .reject{ |l| l.match?(/(Published at|EMail your comments)/) } + .reject{ |l| l.match?(/'|book notes/) } + .select{ |l| l.letter_percentage?(MIN_PERCENTAGE) } + .shuffle.sample +end diff --git a/gemfeed/examples/conf/dotfiles/scripts/taskwarriorfeeder.rb b/gemfeed/examples/conf/dotfiles/scripts/taskwarriorfeeder.rb new file mode 100644 index 00000000..8e3096ea --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/scripts/taskwarriorfeeder.rb @@ -0,0 +1,221 @@ +#!/usr/bin/env ruby + +require 'optparse' +require 'digest' +require 'json' +require 'set' + +PERSONAL_TIMESPAN_D = 30 +WORK_TIMESPAN_D = 14 +WORKTIME_DIR = "#{ENV['HOME']}/git/worktime".freeze +GOS_DIR = "#{ENV['HOME']}/.gosdir".freeze +MAX_PENDING_RANDOM_TASKS = 11 + +def maybe? + [true, false].sample +end + +def run_from_personal_device? 
+ `uname`.chomp == 'Linux' +end + +def random_count + MAX_PENDING_RANDOM_TASKS - `task status:pending +random count`.to_i +end + +def notes(notes_dirs, prefix, dry) + notes_dirs.each do |notes_dir| + Dir["#{notes_dir}/#{prefix}-*"].each do |notes_file| + match = File.read(notes_file).strip.match(/(?<due>\d+)? *(?<tag>[A-Z]?[a-z,-:]+) *(?<body>.*)/m) + next unless match + + tags = match[:tag].split(',') + [prefix] + due = if match[:due].nil? + tags.include?('track') ? '1year' : "#{rand(0..PERSONAL_TIMESPAN_D)}d" + else + "#{match[:due]}d" + end + yield tags, match[:body], due + File.delete(notes_file) unless dry + end + end +end + +def random_quote(md_file) + tag = File.basename(md_file, '.md').downcase + lines = File.readlines(md_file) + + match = lines.first.match(/\((\d+)\)/) + timespan = run_from_personal_device? ? PERSONAL_TIMESPAN_D : WORK_TIMESPAN_D + timespan = match ? match[1].to_i : timespan + + quote = lines.select { |l| l.start_with? '*' }.map { |l| l.sub(/\* +/, '') }.sample + tags = [tag, 'random'] + tags << 'work' if maybe? and maybe? + yield tags, quote.chomp, "#{rand(0..timespan)}d" +end + +def run!(cmd, dry) + puts cmd + return if dry + + puts `#{cmd}` + raise "Command '#{cmd}' failed with #{$?.exitstatus}" if $?.exitstatus != 0 +rescue StandardError => e + puts "Error running command '#{cmd}': #{e.message}" + exit 1 +end + +def skill_add!(skills_str, dry) + skills_file = "#{WORKTIME_DIR}/skills.txt" + # Build the hash here; 'skills' was previously never initialized (NameError) + skills = skills_str.split(',').map(&:strip).to_h { [_1.downcase, _1] } + + File.foreach(skills_file) do |line| + line.chomp! 
+ skills[line.downcase] = line + end + File.open("#{skills_file}.tmp", 'w') do |file| + skills.each_value { |skill| file.puts(skill) } + end + return if dry + + File.rename("#{skills_file}.tmp", skills_file) +end + +def worklog_add!(tag, quote, due, dry) + file = "#{WORKTIME_DIR}/wl-#{Time.now.to_i}n.txt" + content = "#{due.chomp 'd'} #{tag} #{quote}" + + puts "#{file}: #{content}" + File.write(file, content) unless dry +end + +# Queue to Gos https://codeberg.org/snonux/gos +def gos_queue!(tags, message, dry) + tags.delete('share') + platforms = [] + %w[linkedin li mastodon ma noop no].select { tags.include?(_1) }.each do |platform| + platforms << platform + tags.delete(platform) + end + unless platforms.empty? + platforms = %w[share] + platforms + tags = ["#{platforms.join(':')}"] + tags + end + tags = %w[share] + tags if tags.size == 1 && !tags.first.start_with?('share') + tags_str = tags.join(',') + + message = "#{tags_str.empty? ? '' : "#{tags_str} "}#{message}" + file = "#{GOS_DIR}/#{Digest::MD5.hexdigest(message)}.txt" + puts "Writing #{file} with #{message}" + File.write(file, message) unless dry +end + +def task_add!(tags, quote, due, dry) + if quote.empty? + puts 'Not adding task with empty quote' + return + end + if tags.include?('tr') + tags << 'track' + tags.delete('tr') + end + tags << 'work' if tags.include?('mentoring') || tags.include?('productivity') + tags.uniq! + + if tags.include?('task') + run! "task #{quote}", dry + else + project = tags.find { |t| t =~ /^[A-Z]/ } + project = if project.nil? + '' + else + tags.delete(project) + " project:#{project.downcase}" + end + priority = tags.include?('high') ? 'H' : '' + run! "task add due:#{due} priority:#{priority}#{project} +#{tags.join(' +')} '#{quote.gsub("'", '"')}'", dry + end +end + +def task_schedule!(id, due, dry) + run! 
"timeout 5s task modify #{id} due:#{due}", dry +end + +# Randomly schedule all unscheduled tasks but the ones with the +unsched tag +def unscheduled_tasks + lines = `task -lowhigh -unsched -nosched -notes -note -meeting -track due: 2>/dev/null`.split("\n").drop(1) + lines.pop + lines.map { |foo| foo.split.first }.each do |id| + yield id if id.to_i.positive? + end +end + +begin + opts = { + quotes_dir: "#{ENV['HOME']}/Notes/HabitsAndQuotes", + notes_dirs: "#{ENV['HOME']}/Notes,#{ENV['HOME']}/Notes/Quicklogger,#{ENV['HOME']}/git/worktime", + dry_run: false, + no_random: false + } + + opt_parser = OptionParser.new do |o| + o.banner = 'Usage: ruby taskwarriorfeeder.rb [options]' + o.on('-d', '--quotes-dir DIR', 'The quotes directory') { |v| opts[:quotes_dir] = v } + o.on('-n', '--notes-dirs DIR1,DIR2,...', 'The notes directories') { |v| opts[:notes_dirs] = v } + o.on('-D', '--dry-run', 'Dry run mode') { opts[:dry_run] = true } + o.on('-R', '--no-randoms', 'No random entries') { opts[:no_random] = true } + o.on_tail('-h', '--help', 'Show this help message and exit') { puts o and exit } + end + + opt_parser.parse!(ARGV) + core_habits_md_file = "#{opts[:quotes_dir]}/CoreHabits.md" + + (run_from_personal_device? ? %w[ql pl] : %w[wl]).each do |prefix| + notes(opts[:notes_dirs].split(','), prefix, opts[:dry_run]) do |tags, note, due| + if tags.include?('skill') || tags.include?('skills') + skill_add!(note, opts[:dry_run]) + elsif tags.include? 'work' + worklog_add!(:log, note, due, opts[:dry_run]) + elsif tags.any? { |tag| tag.start_with?('share') } + gos_queue!(tags, note, opts[:dry_run]) + else + task_add!(tags, note, due, opts[:dry_run]) + end + end + end + + unless opts[:no_random] + if File.exist?(core_habits_md_file) + random_quote(core_habits_md_file) do |tags, quote, due| + task_add!(tags, quote, due, opts[:dry_run]) + end + end + count = random_count + + Dir["#{opts[:quotes_dir]}/*.md"].shuffle.each do |md_file| + next unless maybe? 
+ break if count <= 0 + + random_quote(md_file) do |tags, quote, due| + task_add!(tags, quote, due, opts[:dry_run]) + count -= 1 + end + end + end + + if Dir.exist?(GOS_DIR) && !opts[:dry_run] + Dir["#{WORKTIME_DIR}/tw-gos-*.json"].each do |tw_gos| + JSON.parse(File.read(tw_gos)).each do |entry| + gos_queue!(entry['tags'], entry['description'], opts[:dry_run]) + end + File.delete(tw_gos) + rescue StandardError => e + puts e + end + end + + unscheduled_tasks do |id| + task_schedule!(id, "#{rand(0..PERSONAL_TIMESPAN_D)}d", opts[:dry_run]) + end +end diff --git a/gemfeed/examples/conf/dotfiles/signature b/gemfeed/examples/conf/dotfiles/signature new file mode 100644 index 00000000..8031719e --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/signature @@ -0,0 +1,2 @@ +Paul Buetow +paul.buetow.org diff --git a/gemfeed/examples/conf/dotfiles/ssh/config b/gemfeed/examples/conf/dotfiles/ssh/config new file mode 100644 index 00000000..5b4b250e --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/ssh/config @@ -0,0 +1,21 @@ +ControlPath ~/.ssh/cp-%C +ControlMaster auto +#UseKeychain yes +AddKeysToAgent yes +ControlPersist 60m +#StrictHostKeyChecking no + +Host blowfish.buetow.org +User rex +Port 2 + +Host fishfinger.buetow.org +User rex +Port 2 + +Host *.aws.buetow.org +User ec2-user +Port 22 + +Host *.buetow.org +Port 2 diff --git a/gemfeed/examples/conf/dotfiles/sway/config.d/keyboard.conf b/gemfeed/examples/conf/dotfiles/sway/config.d/keyboard.conf new file mode 100644 index 00000000..6b10a788 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/sway/config.d/keyboard.conf @@ -0,0 +1,6 @@ +input "type:keyboard" { + xkb_layout us,gb,de + xkb_options grp:win_space_toggle +} + +input * xkb_options "caps:escape" diff --git a/gemfeed/examples/conf/dotfiles/tmux/tmux.conf b/gemfeed/examples/conf/dotfiles/tmux/tmux.conf new file mode 100644 index 00000000..42c53866 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/tmux/tmux.conf @@ -0,0 +1,32 @@ +source 
~/.config/tmux/tmux.local.conf + +set-option -g allow-rename off +set-option -g history-limit 100000 +set-option -s escape-time 0 +set-option -g set-titles on + +set-window-option -g mode-keys vi + +bind-key h select-pane -L +bind-key j select-pane -D +bind-key k select-pane -U +bind-key l select-pane -R + +bind-key H resize-pane -L 5 +bind-key J resize-pane -D 5 +bind-key K resize-pane -U 5 +bind-key L resize-pane -R 5 + +bind-key b break-pane -d +bind-key c new-window -c '#{pane_current_path}' +bind-key F new-window -n "session-switcher" "tmux list-sessions | fzf | cut -d: -f1 | xargs tmux switch-client -t" +bind-key p setw synchronize-panes off +bind-key P setw synchronize-panes on +bind-key r source-file ~/.config/tmux/tmux.conf \; display-message "~/.config/tmux/tmux.conf reloaded" +bind-key T choose-tree + +set-option -g pane-active-border-style fg=magenta,bold + +set -g status-right '#{@hexai_status} #[fg=colour8]| %H:%M' +set -g status-right-length 120 +set-environment -g HEXAI_TMUX_STATUS_THEME white-on-purple diff --git a/gemfeed/examples/conf/dotfiles/tmux/tmux.local.conf b/gemfeed/examples/conf/dotfiles/tmux/tmux.local.conf new file mode 100644 index 00000000..adb6294b --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/tmux/tmux.local.conf @@ -0,0 +1,2 @@ +bind-key -T copy-mode-vi 'v' send -X begin-selection +bind-key -T copy-mode-vi 'y' send -X copy-selection-and-cancel diff --git a/gemfeed/examples/conf/dotfiles/vale.ini b/gemfeed/examples/conf/dotfiles/vale.ini new file mode 100644 index 00000000..3b396788 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/vale.ini @@ -0,0 +1,6 @@ +StylesPath = styles +MinAlertLevel = suggestion +Packages = Microsoft, proselint + +[*] +BasedOnStyles = Vale, Microsoft, proselint diff --git a/gemfeed/examples/conf/dotfiles/waybar/config.jsonc b/gemfeed/examples/conf/dotfiles/waybar/config.jsonc new file mode 100644 index 00000000..db2aeea6 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/waybar/config.jsonc @@ -0,0 +1,194 @@ +// -*- mode: 
jsonc -*- +{ + // "layer": "top", // Waybar at top layer + // "position": "bottom", // Waybar position (top|bottom|left|right) + "height": 20, // Waybar height (to be removed for auto height) + // "width": 1280, // Waybar width + "spacing": 1, // Gaps between modules (4px) + // Choose the order of the modules + "modules-left": [ + "sway/workspaces", + "sway/mode", + "sway/scratchpad" + ], + "modules-center": [ + ], + "modules-right": [ + "idle_inhibitor", + "pulseaudio", + "network", + "power-profiles-daemon", + "temperature", + "sway/language", + "battery", + "clock", + "tray" + ], + // Modules configuration + // "sway/workspaces": { + // "disable-scroll": true, + // "all-outputs": true, + // "warp-on-scroll": false, + // "format": "{name}: {icon}", + // "format-icons": { + // "1": "", + // "2": "", + // "3": "", + // "4": "", + // "5": "", + // "urgent": "", + // "focused": "", + // "default": "" + // } + // }, + "keyboard-state": { + "numlock": true, + "capslock": true, + "format": "{name} {icon}", + "format-icons": { + "locked": "", + "unlocked": "" + } + }, + "sway/mode": { + "format": "<span style=\"italic\">{}</span>" + }, + "sway/scratchpad": { + "format": "{icon} {count}", + "show-empty": false, + "format-icons": ["", ""], + "tooltip": true, + "tooltip-format": "{app}: {title}" + }, + "mpd": { + "format": "{stateIcon} {consumeIcon}{randomIcon}{repeatIcon}{singleIcon}{artist} - {album} - {title} ({elapsedTime:%M:%S}/{totalTime:%M:%S}) ⸨{songPosition}|{queueLength}⸩ {volume}% ", + "format-disconnected": "Disconnected ", + "format-stopped": "{consumeIcon}{randomIcon}{repeatIcon}{singleIcon}Stopped ", + "unknown-tag": "N/A", + "interval": 5, + "consume-icons": { + "on": " " + }, + "random-icons": { + "off": "<span color=\"#f53c3c\"></span> ", + "on": " " + }, + "repeat-icons": { + "on": " " + }, + "single-icons": { + "on": "1 " + }, + "state-icons": { + "paused": "", + "playing": "" + }, + "tooltip-format": "MPD (connected)", + "tooltip-format-disconnected": 
"MPD (disconnected)" + }, + "idle_inhibitor": { + "format": "{icon}", + "format-icons": { + "activated": "", + "deactivated": "" + } + }, + "tray": { + // "icon-size": 21, + "spacing": 10 + }, + "clock": { + // "timezone": "America/New_York", + "tooltip-format": "<big>{:%Y %B}</big>\n<tt><small>{calendar}</small></tt>", + "format-alt": "{:%Y-%m-%d}" + }, + "cpu": { + "format": "{usage}% ", + "tooltip": false + }, + "memory": { + "format": "{}% " + }, + "temperature": { + // "thermal-zone": 2, + // "hwmon-path": "/sys/class/hwmon/hwmon2/temp1_input", + "critical-threshold": 80, + // "format-critical": "{temperatureC}°C {icon}", + "format": "{temperatureC}°C {icon}", + "format-icons": ["", "", ""] + }, + "backlight": { + // "device": "acpi_video1", + "format": "{percent}% {icon}", + "format-icons": ["🌑", "🌘", "🌗", "🌖", "🌕"] + }, + "battery": { + "states": { + // "good": 95, + "warning": 30, + "critical": 15 + }, + "format": "{capacity}% {icon}", + "format-full": "{capacity}% {icon}", + "format-charging": "{capacity}% ", + "format-plugged": "{capacity}% ", + "format-alt": "{time} {icon}", + // "format-good": "", // An empty format will hide the module + // "format-full": "", + "format-icons": ["", "", "", "", ""] + }, + "battery#bat2": { + "bat": "BAT2" + }, + "power-profiles-daemon": { + "format": "{icon}", + "tooltip-format": "Power profile: {profile}\nDriver: {driver}", + "tooltip": true, + "format-icons": { + "default": "", + "performance": "", + "balanced": "", + "power-saver": "" + } + }, + "network": { + // "interface": "wlp2*", // (Optional) To force the use of this interface + "format-wifi": "{essid} ({signalStrength}%) ", + "format-ethernet": "{ipaddr}/{cidr} ", + "tooltip-format": "{ifname} via {gwaddr} ", + "format-linked": "{ifname} (No IP) ", + "format-disconnected": "Disconnected ⚠", + "format-alt": "{ifname}: {ipaddr}/{cidr}" + }, + "pulseaudio": { + // "scroll-step": 1, // %, can be a float + "format": "{volume}% {icon} {format_source}", + 
"format-bluetooth": "{volume}% {icon} {format_source}", + "format-bluetooth-muted": " {icon} {format_source}", + "format-muted": " {format_source}", + "format-source": "{volume}% ", + "format-source-muted": "", + "format-icons": { + "headphone": "", + "hands-free": "", + "headset": "", + "phone": "", + "portable": "", + "car": "", + "default": ["", "", ""] + }, + "on-click": "pavucontrol" + }, + "custom/media": { + "format": "{icon} {}", + "return-type": "json", + "max-length": 40, + "format-icons": { + "spotify": "", + "default": "🎜" + }, + "escape": true, + "exec": "$HOME/.config/waybar/mediaplayer.py 2> /dev/null" // Script in resources folder + // "exec": "$HOME/.config/waybar/mediaplayer.py --player spotify 2> /dev/null" // Filter player based on name + } +} diff --git a/gemfeed/examples/conf/dotfiles/waybar/style.css b/gemfeed/examples/conf/dotfiles/waybar/style.css new file mode 100644 index 00000000..e0310372 --- /dev/null +++ b/gemfeed/examples/conf/dotfiles/waybar/style.css @@ -0,0 +1,326 @@ +* { + font-family: 'Noto Sans Mono', 'Font Awesome 6 Free', 'Font Awesome 6 Brands', monospace; + font-size: 13px; +} + +window#waybar { + background-color: rgba(43, 48, 59, 0.5); + border-bottom: 3px solid rgba(100, 114, 125, 0.5); + color: #ffffff; + transition-property: background-color; + transition-duration: .5s; +} + +window#waybar.hidden { + opacity: 0.2; +} + +/* +window#waybar.empty { + background-color: transparent; +} +window#waybar.solo { + background-color: #FFFFFF; +} +*/ + +window#waybar.termite { + background-color: #3F3F3F; +} + +window#waybar.chromium { + background-color: #000000; + border: none; +} + +button { + /* Use box-shadow instead of border so the text isn't offset */ + box-shadow: inset 0 -3px transparent; + /* Avoid rounded borders under each button name */ + border: none; + border-radius: 0; +} + +/* https://github.com/Alexays/Waybar/wiki/FAQ#the-workspace-buttons-have-a-strange-hover-effect */ +button:hover { + background: inherit; + 
box-shadow: inset 0 -3px #ffffff; +} + +/* you can set a style on hover for any module like this */ +#pulseaudio:hover { + background-color: #a37800; +} + +#workspaces button { + padding: 0 5px; + background-color: transparent; + color: #ffffff; +} + +#workspaces button:hover { + background: rgba(0, 0, 0, 0.2); +} + +#workspaces button.focused { + background-color: #64727D; + box-shadow: inset 0 -3px #ffffff; +} + +#workspaces button.urgent { + background-color: #eb4d4b; +} + +#mode { + background-color: #64727D; + box-shadow: inset 0 -3px #ffffff; +} + +#clock, +#battery, +#cpu, +#memory, +#disk, +#temperature, +#backlight, +#network, +#pulseaudio, +#wireplumber, +#custom-media, +#tray, +#mode, +#idle_inhibitor, +#scratchpad, +#power-profiles-daemon, +#mpd { + padding: 0 10px; + color: #ffffff; +} + +#window, +#workspaces { + margin: 0 4px; +} + +/* If workspaces is the leftmost module, omit left margin */ +.modules-left > widget:first-child > #workspaces { + margin-left: 0; +} + +/* If workspaces is the rightmost module, omit right margin */ +.modules-right > widget:last-child > #workspaces { + margin-right: 0; +} + +#clock { + background-color: #64727D; +} + +#battery { + background-color: #ffffff; + color: #000000; +} + +#battery.charging, #battery.plugged { + color: #ffffff; + background-color: #26A65B; +} + +@keyframes blink { + to { + background-color: #ffffff; + color: #000000; + } +} + +/* Using steps() instead of linear as a timing function to limit cpu usage */ +#battery.critical:not(.charging) { + background-color: #f53c3c; + color: #ffffff; + animation-name: blink; + animation-duration: 0.5s; + animation-timing-function: steps(12); + animation-iteration-count: infinite; + animation-direction: alternate; +} + +#power-profiles-daemon { + padding-right: 15px; +} + +#power-profiles-daemon.performance { + background-color: #f53c3c; + color: #ffffff; +} + +#power-profiles-daemon.balanced { + background-color: #2980b9; + color: #ffffff; +} + 
+#power-profiles-daemon.power-saver { + background-color: #2ecc71; + color: #000000; +} + +label:focus { + background-color: #000000; +} + +#cpu { + background-color: #2ecc71; + color: #000000; +} + +#memory { + background-color: #9b59b6; +} + +#disk { + background-color: #964B00; +} + +#backlight { + background-color: #90b1b1; +} + +#network { + background-color: #2980b9; +} + +#network.disconnected { + background-color: #f53c3c; +} + +#pulseaudio { + background-color: #f1c40f; + color: #000000; +} + +#pulseaudio.muted { + background-color: #90b1b1; + color: #2a5c45; +} + +#wireplumber { + background-color: #fff0f5; + color: #000000; +} + +#wireplumber.muted { + background-color: #f53c3c; +} + +#custom-media { + background-color: #66cc99; + color: #2a5c45; + min-width: 100px; +} + +#custom-media.custom-spotify { + background-color: #66cc99; +} + +#custom-media.custom-vlc { + background-color: #ffa000; +} + +#temperature { + background-color: #f0932b; +} + +#temperature.critical { + background-color: #eb4d4b; +} + +#tray { + background-color: #2980b9; +} + +#tray > .passive { + -gtk-icon-effect: dim; +} + +#tray > .needs-attention { + -gtk-icon-effect: highlight; + background-color: #eb4d4b; +} + +#idle_inhibitor { + background-color: #2d3436; +} + +#idle_inhibitor.activated { + background-color: #ecf0f1; + color: #2d3436; +} + +#mpd { + background-color: #66cc99; + color: #2a5c45; +} + +#mpd.disconnected { + background-color: #f53c3c; +} + +#mpd.stopped { + background-color: #90b1b1; +} + +#mpd.paused { + background-color: #51a37a; +} + +#language { + background: #00b093; + color: #740864; + padding: 0 5px; + margin: 0 5px; + min-width: 16px; +} + +#keyboard-state { + background: #97e1ad; + color: #000000; + padding: 0 0px; + margin: 0 5px; + min-width: 16px; +} + +#keyboard-state > label { + padding: 0 5px; +} + +#keyboard-state > label.locked { + background: rgba(0, 0, 0, 0.2); +} + +#scratchpad { + background: rgba(0, 0, 0, 0.2); +} + +#scratchpad.empty { + 
background-color: transparent; +} + +#privacy { + padding: 0; +} + +#privacy-item { + padding: 0 5px; + color: white; +} + +#privacy-item.screenshare { + background-color: #cf5700; +} + +#privacy-item.audio-in { + background-color: #1ca000; +} + +#privacy-item.audio-out { + background-color: #0069d4; +} diff --git a/gemfeed/examples/conf/f3s/anki-sync-server/Justfile b/gemfeed/examples/conf/f3s/anki-sync-server/Justfile new file mode 100644 index 00000000..73d679c7 --- /dev/null +++ b/gemfeed/examples/conf/f3s/anki-sync-server/Justfile @@ -0,0 +1,12 @@ +NAMESPACE := "services" +RELEASE_NAME := "anki-sync-server" +CHART_PATH := "./helm-chart" + +install: + helm install {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} --create-namespace + +upgrade: + helm upgrade {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} + +delete: + helm uninstall {{RELEASE_NAME}} --namespace {{NAMESPACE}} diff --git a/gemfeed/examples/conf/f3s/anki-sync-server/README.md b/gemfeed/examples/conf/f3s/anki-sync-server/README.md new file mode 100644 index 00000000..e3aee076 --- /dev/null +++ b/gemfeed/examples/conf/f3s/anki-sync-server/README.md @@ -0,0 +1,34 @@ + +# Anki Sync Server Kubernetes Deployment + +This directory contains the Kubernetes configuration for deploying the Anki Sync Server. + +## Deployment + +To deploy the Anki Sync Server, install the Helm chart via the provided `Justfile`: + +```bash +just install +``` + +## Secret Management + +The deployment uses a Kubernetes secret to manage the `SYNC_USER1` environment variable. This secret is not included in the repository for security reasons. You must create it manually in the `services` namespace. + +### Creating the Secret + +To create the secret, use the following `kubectl` command: + +```bash +kubectl create secret generic anki-sync-server-secret --from-literal=SYNC_USER1='paul:SECRETPASSWORD' -n services +``` + +Replace `paul:SECRETPASSWORD` with your desired username and password. 
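Note that Kubernetes stores secret values base64-encoded, so `kubectl get secret anki-sync-server-secret -n services -o jsonpath='{.data.SYNC_USER1}'` returns the encoded form, not the plain credential. A quick local round-trip (using the placeholder value from above) shows the encoding involved:

```bash
# Kubernetes keeps secret data base64-encoded; decoding recovers the value.
# 'paul:SECRETPASSWORD' is the README's placeholder, not a real credential.
encoded=$(printf '%s' 'paul:SECRETPASSWORD' | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)
printf 'decoded: %s\n' "$decoded"
```

The same `base64 -d` step applies when inspecting the live secret's jsonpath output.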
+ +### Updating the Secret + +To update the secret, you can delete and recreate it, or use `kubectl edit`: + +```bash +kubectl edit secret anki-sync-server-secret -n services +``` diff --git a/gemfeed/examples/conf/f3s/anki-sync-server/docker-image/Dockerfile b/gemfeed/examples/conf/f3s/anki-sync-server/docker-image/Dockerfile new file mode 100644 index 00000000..81fad856 --- /dev/null +++ b/gemfeed/examples/conf/f3s/anki-sync-server/docker-image/Dockerfile @@ -0,0 +1,39 @@ +FROM rust:1.85.0-alpine3.20 AS builder + +ARG ANKI_VERSION + +RUN apk update && apk add --no-cache build-base protobuf && rm -rf /var/cache/apk/* + +RUN cargo install --git https://github.com/ankitects/anki.git \ +--tag ${ANKI_VERSION} \ +--root /anki-server \ +--locked \ +anki-sync-server + +FROM alpine:3.21.0 + +# Default PUID and PGID values (can be overridden at runtime). Use these to +# ensure the files on the volume have the permissions you need. +ENV PUID=1000 +ENV PGID=1000 + +COPY --from=builder /anki-server/bin/anki-sync-server /usr/local/bin/anki-sync-server + +RUN apk update && apk add --no-cache bash su-exec && rm -rf /var/cache/apk/* + +EXPOSE 8080 + +COPY entrypoint.sh /entrypoint.sh +RUN chmod +x /entrypoint.sh + +ENTRYPOINT ["/entrypoint.sh"] +CMD ["anki-sync-server"] + +# This health check will work for Anki versions 24.08.x and newer. +# For older versions, it may report an unhealthy status even though the server is actually running. 
+HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \ + CMD wget -qO- http://127.0.0.1:8080/health || exit 1 + +VOLUME /anki_data + +LABEL maintainer="Jean Khawand <jk@jeankhawand.com>" diff --git a/gemfeed/examples/conf/f3s/anki-sync-server/docker-image/Justfile b/gemfeed/examples/conf/f3s/anki-sync-server/docker-image/Justfile new file mode 100644 index 00000000..5da854f3 --- /dev/null +++ b/gemfeed/examples/conf/f3s/anki-sync-server/docker-image/Justfile @@ -0,0 +1,6 @@ +all: + docker build -t anki-sync-server:25.07.5b --build-arg ANKI_VERSION=25.07.5 . +f3s: + docker build -t anki-sync-server:25.07.5b --build-arg ANKI_VERSION=25.07.5 . + docker tag anki-sync-server:25.07.5b r0.lan.buetow.org:30001/anki-sync-server:25.07.5b + docker push r0.lan.buetow.org:30001/anki-sync-server:25.07.5b diff --git a/gemfeed/examples/conf/f3s/anki-sync-server/docker-image/entrypoint.sh b/gemfeed/examples/conf/f3s/anki-sync-server/docker-image/entrypoint.sh new file mode 100644 index 00000000..9a72cca3 --- /dev/null +++ b/gemfeed/examples/conf/f3s/anki-sync-server/docker-image/entrypoint.sh @@ -0,0 +1,31 @@ +#!/bin/sh +set -o errexit +set -o nounset +set -o pipefail + +# Default PUID and PGID if not provided +export PUID=${PUID:-1000} +export PGID=${PGID:-1000} + +# These values are fixed and cannot be overwritten from the outside for +# convenience and safety reasons +export SYNC_PORT=8080 +export SYNC_BASE=/anki_data + +# Check if group exists, create if not +if ! getent group anki-group > /dev/null 2>&1; then + addgroup -g "$PGID" anki-group +fi + +# Check if user exists, create if not +if ! 
id -u anki > /dev/null 2>&1; then + adduser -D -H -u "$PUID" -G anki-group anki +fi + +# Ensure the data dir exists (the chown is left disabled; set host permissions to match PUID/PGID) +mkdir -p /anki_data +#chown anki:anki-group /anki_data + +# Run the provided command as the `anki` user +exec su-exec anki "$@" + diff --git a/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/Chart.yaml b/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/Chart.yaml new file mode 100644 index 00000000..632f09ae --- /dev/null +++ b/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/Chart.yaml @@ -0,0 +1,5 @@ +apiVersion: v2 +name: anki-sync-server +description: A Helm chart for deploying the Anki Sync Server. +version: 0.1.0 +appVersion: "25.07.5b" diff --git a/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/README.md b/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/README.md new file mode 100644 index 00000000..1b485be9 --- /dev/null +++ b/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/README.md @@ -0,0 +1,11 @@ +# Anki Sync Server Helm Chart + +This chart deploys the Anki Sync Server. + +## Installing the Chart + +To install the chart with the release name `anki-sync-server`, run the following command: + +```bash +helm install anki-sync-server . 
--namespace services --create-namespace +``` diff --git a/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/templates/deployment.yaml b/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/templates/deployment.yaml new file mode 100644 index 00000000..181b6c97 --- /dev/null +++ b/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/templates/deployment.yaml @@ -0,0 +1,35 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: anki-sync-server + namespace: services +spec: + replicas: 1 + selector: + matchLabels: + app: anki-sync-server + template: + metadata: + labels: + app: anki-sync-server + spec: + containers: + - name: anki-sync-server + image: registry.lan.buetow.org:30001/anki-sync-server:25.07.5b + ports: + - containerPort: 8080 + env: + - name: SYNC_PORT + value: "8080" + - name: SYNC_USER1 + valueFrom: + secretKeyRef: + name: anki-sync-server-secret + key: SYNC_USER1 + volumeMounts: + - name: anki-data + mountPath: /anki_data + volumes: + - name: anki-data + persistentVolumeClaim: + claimName: anki-data-pvc diff --git a/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/templates/ingress.yaml b/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/templates/ingress.yaml new file mode 100644 index 00000000..010c5884 --- /dev/null +++ b/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/templates/ingress.yaml @@ -0,0 +1,20 @@ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: anki-sync-server-ingress + namespace: services + annotations: + spec.ingressClassName: traefik + traefik.ingress.kubernetes.io/router.entrypoints: web +spec: + rules: + - host: anki.f3s.buetow.org + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: anki-sync-server-service + port: + number: 8080 diff --git a/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/templates/persistent-volume.yaml b/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/templates/persistent-volume.yaml new file mode 100644 index 00000000..da715ea2 
--- /dev/null +++ b/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/templates/persistent-volume.yaml @@ -0,0 +1,27 @@ +apiVersion: v1 +kind: PersistentVolume +metadata: + name: anki-data-pv +spec: + capacity: + storage: 10Gi + volumeMode: Filesystem + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Retain + hostPath: + path: /data/nfs/k3svolumes/anki-sync-server/anki_data + type: Directory +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: anki-data-pvc + namespace: services +spec: + storageClassName: "" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 10Gi diff --git a/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/templates/service.yaml b/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/templates/service.yaml new file mode 100644 index 00000000..a8eb183e --- /dev/null +++ b/gemfeed/examples/conf/f3s/anki-sync-server/helm-chart/templates/service.yaml @@ -0,0 +1,15 @@ +apiVersion: v1 +kind: Service +metadata: + labels: + app: anki-sync-server + name: anki-sync-server-service + namespace: services +spec: + ports: + - name: web + port: 8080 + protocol: TCP + targetPort: 8080 + selector: + app: anki-sync-server diff --git a/gemfeed/examples/conf/f3s/audiobookshelf/Justfile b/gemfeed/examples/conf/f3s/audiobookshelf/Justfile new file mode 100644 index 00000000..bc020beb --- /dev/null +++ b/gemfeed/examples/conf/f3s/audiobookshelf/Justfile @@ -0,0 +1,12 @@ +NAMESPACE := "services" +RELEASE_NAME := "audiobookshelf" +CHART_PATH := "./helm-chart" + +install: + helm install {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} --create-namespace + +upgrade: + helm upgrade {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} + +delete: + helm uninstall {{RELEASE_NAME}} --namespace {{NAMESPACE}} diff --git a/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/Chart.yaml b/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/Chart.yaml new file mode 100644 index 00000000..dbd55e07 --- /dev/null +++ 
b/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/Chart.yaml @@ -0,0 +1,5 @@ +apiVersion: v2 +name: audiobookshelf +description: A Helm chart for deploying Audiobookshelf. +version: 0.1.0 +appVersion: "latest" diff --git a/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/README.md b/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/README.md new file mode 100644 index 00000000..670efa09 --- /dev/null +++ b/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/README.md @@ -0,0 +1,19 @@ +# Audiobookshelf Helm Chart + +This chart deploys Audiobookshelf. + +## Prerequisites + +Before installing the chart, you must manually create the following directories on your host system to be used by the persistent volumes: + +- `/data/nfs/k3svolumes/audiobookshelf/config` +- `/data/nfs/k3svolumes/audiobookshelf/audiobooks` +- `/data/nfs/k3svolumes/audiobookshelf/podcasts` + +## Installing the Chart + +To install the chart with the release name `audiobookshelf`, run the following command: + +```bash +helm install audiobookshelf . 
--namespace services --create-namespace +``` diff --git a/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/templates/deployment.yaml b/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/templates/deployment.yaml new file mode 100644 index 00000000..65e536ab --- /dev/null +++ b/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/templates/deployment.yaml @@ -0,0 +1,53 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: audiobookshelf + namespace: services +spec: + replicas: 1 + selector: + matchLabels: + app: audiobookshelf + template: + metadata: + labels: + app: audiobookshelf + spec: + containers: + - name: audiobookshelf + image: ghcr.io/advplyr/audiobookshelf + ports: + - containerPort: 80 + volumeMounts: + - name: audiobookshelf-config + mountPath: /config + - name: audiobookshelf-audiobooks + mountPath: /audiobooks + - name: audiobookshelf-podcasts + mountPath: /podcasts + volumes: + - name: audiobookshelf-config + persistentVolumeClaim: + claimName: audiobookshelf-config-pvc + - name: audiobookshelf-audiobooks + persistentVolumeClaim: + claimName: audiobookshelf-audiobooks-pvc + - name: audiobookshelf-podcasts + persistentVolumeClaim: + claimName: audiobookshelf-podcasts-pvc +--- +apiVersion: v1 +kind: Service +metadata: + labels: + app: audiobookshelf + name: audiobookshelf-service + namespace: services +spec: + ports: + - name: web + port: 80 + protocol: TCP + targetPort: 80 + selector: + app: audiobookshelf diff --git a/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/templates/ingress.yaml b/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/templates/ingress.yaml new file mode 100644 index 00000000..6e4f7ac7 --- /dev/null +++ b/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/templates/ingress.yaml @@ -0,0 +1,20 @@ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: audiobookshelf-ingress + namespace: services + annotations: + spec.ingressClassName: traefik + traefik.ingress.kubernetes.io/router.entrypoints: web 
+spec: + rules: + - host: audiobookshelf.f3s.buetow.org + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: audiobookshelf-service + port: + number: 80 diff --git a/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/templates/persistent-volumes.yaml b/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/templates/persistent-volumes.yaml new file mode 100644 index 00000000..8691d141 --- /dev/null +++ b/gemfeed/examples/conf/f3s/audiobookshelf/helm-chart/templates/persistent-volumes.yaml @@ -0,0 +1,83 @@ +apiVersion: v1 +kind: PersistentVolume +metadata: + name: audiobookshelf-config-pv +spec: + capacity: + storage: 1Gi + volumeMode: Filesystem + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Retain + hostPath: + path: /data/nfs/k3svolumes/audiobookshelf/config + type: Directory +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: audiobookshelf-config-pvc + namespace: services +spec: + storageClassName: "" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi +--- +apiVersion: v1 +kind: PersistentVolume +metadata: + name: audiobookshelf-audiobooks-pv +spec: + capacity: + storage: 300Gi + volumeMode: Filesystem + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Retain + hostPath: + path: /data/nfs/k3svolumes/audiobookshelf/audiobooks + type: Directory +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: audiobookshelf-audiobooks-pvc + namespace: services +spec: + storageClassName: "" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 300Gi +--- +apiVersion: v1 +kind: PersistentVolume +metadata: + name: audiobookshelf-podcasts-pv +spec: + capacity: + storage: 50Gi + volumeMode: Filesystem + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Retain + hostPath: + path: /data/nfs/k3svolumes/audiobookshelf/podcasts + type: Directory +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: audiobookshelf-podcasts-pvc + 
 namespace: services +spec: + storageClassName: "" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 50Gi diff --git a/gemfeed/examples/conf/f3s/example-apache-volume-claim/Justfile b/gemfeed/examples/conf/f3s/example-apache-volume-claim/Justfile new file mode 100644 index 00000000..e8003e8b --- /dev/null +++ b/gemfeed/examples/conf/f3s/example-apache-volume-claim/Justfile @@ -0,0 +1,12 @@ +NAMESPACE := "test" +RELEASE_NAME := "example-apache-volume-claim" +CHART_PATH := "./helm-chart" + +install: + helm install {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} --create-namespace + +upgrade: + helm upgrade {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} + +delete: + helm uninstall {{RELEASE_NAME}} --namespace {{NAMESPACE}} diff --git a/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/Chart.yaml b/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/Chart.yaml new file mode 100644 index 00000000..78d53976 --- /dev/null +++ b/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/Chart.yaml @@ -0,0 +1,5 @@ +apiVersion: v2 +name: apache-volume-claim +description: A Helm chart for deploying Apache with a persistent volume claim. +version: 0.1.0 +appVersion: "1.0" diff --git a/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/README.md b/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/README.md new file mode 100644 index 00000000..23d14cde --- /dev/null +++ b/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/README.md @@ -0,0 +1,11 @@ +# Apache Helm Chart with Persistent Volume + +This chart deploys a simple Apache web server with a persistent volume claim. + +## Installing the Chart + +To install the chart with the release name `example-apache-volume-claim`, run the following command: + +```bash +helm install example-apache-volume-claim . --namespace test --create-namespace +```
\ No newline at end of file diff --git a/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/templates/apache-deployment.yaml b/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/templates/apache-deployment.yaml new file mode 100644 index 00000000..78706a34 --- /dev/null +++ b/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/templates/apache-deployment.yaml @@ -0,0 +1,41 @@ +# Apache HTTP Server Deployment +apiVersion: apps/v1 +kind: Deployment +metadata: + name: apache-deployment + namespace: test +spec: + replicas: 2 + selector: + matchLabels: + app: apache + template: + metadata: + labels: + app: apache + spec: + containers: + - name: apache + image: httpd:latest + ports: + # Container port where Apache listens + - containerPort: 80 + readinessProbe: + httpGet: + path: / + port: 80 + initialDelaySeconds: 5 + periodSeconds: 10 + livenessProbe: + httpGet: + path: / + port: 80 + initialDelaySeconds: 15 + periodSeconds: 10 + volumeMounts: + - name: apache-htdocs + mountPath: /usr/local/apache2/htdocs/ + volumes: + - name: apache-htdocs + persistentVolumeClaim: + claimName: example-apache-pvc diff --git a/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/templates/apache-ingress.yaml b/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/templates/apache-ingress.yaml new file mode 100644 index 00000000..b26f95bd --- /dev/null +++ b/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/templates/apache-ingress.yaml @@ -0,0 +1,40 @@ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: apache-ingress + namespace: test + annotations: + spec.ingressClassName: traefik + traefik.ingress.kubernetes.io/router.entrypoints: web +spec: + rules: + - host: f3s.buetow.org + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: apache-service + port: + number: 80 + - host: standby.f3s.buetow.org + http: + paths: + - path: / + pathType: Prefix + 
backend: + service: + name: apache-service + port: + number: 80 + - host: www.f3s.buetow.org + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: apache-service + port: + number: 80 diff --git a/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/templates/apache-persistent-volume.yaml b/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/templates/apache-persistent-volume.yaml new file mode 100644 index 00000000..7df28e6b --- /dev/null +++ b/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/templates/apache-persistent-volume.yaml @@ -0,0 +1,27 @@ +apiVersion: v1 +kind: PersistentVolume +metadata: + name: example-apache-pv +spec: + capacity: + storage: 1Gi + volumeMode: Filesystem + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Retain + hostPath: + path: /data/nfs/k3svolumes/example-apache + type: Directory +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: example-apache-pvc + namespace: test +spec: + storageClassName: "" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/templates/apache-service.yaml b/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/templates/apache-service.yaml new file mode 100644 index 00000000..1105e3a7 --- /dev/null +++ b/gemfeed/examples/conf/f3s/example-apache-volume-claim/helm-chart/templates/apache-service.yaml @@ -0,0 +1,17 @@ +apiVersion: v1 +kind: Service +metadata: + labels: + app: apache + name: apache-service + namespace: test +spec: + ports: + - name: web + port: 80 + protocol: TCP + # Expose port 80 on the service + targetPort: 80 + selector: + # Link this service to pods with the label app=apache + app: apache diff --git a/gemfeed/examples/conf/f3s/example-apache/Justfile b/gemfeed/examples/conf/f3s/example-apache/Justfile new file mode 100644 index 00000000..579b9253 --- /dev/null +++ 
b/gemfeed/examples/conf/f3s/example-apache/Justfile @@ -0,0 +1,12 @@ +NAMESPACE := "test" +RELEASE_NAME := "example-apache" +CHART_PATH := "./helm-chart" + +install: + helm install {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} --create-namespace + +upgrade: + helm upgrade {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} + +delete: + helm uninstall {{RELEASE_NAME}} --namespace {{NAMESPACE}} diff --git a/gemfeed/examples/conf/f3s/example-apache/helm-chart/Chart.yaml b/gemfeed/examples/conf/f3s/example-apache/helm-chart/Chart.yaml new file mode 100644 index 00000000..6d496436 --- /dev/null +++ b/gemfeed/examples/conf/f3s/example-apache/helm-chart/Chart.yaml @@ -0,0 +1,5 @@ +apiVersion: v2 +name: apache +description: A Helm chart for deploying Apache. +version: 0.1.0 +appVersion: "1.0" diff --git a/gemfeed/examples/conf/f3s/example-apache/helm-chart/README.md b/gemfeed/examples/conf/f3s/example-apache/helm-chart/README.md new file mode 100644 index 00000000..4eb16d4f --- /dev/null +++ b/gemfeed/examples/conf/f3s/example-apache/helm-chart/README.md @@ -0,0 +1,11 @@ +# Apache Helm Chart + +This chart deploys a simple Apache web server. + +## Installing the Chart + +To install the chart with the release name `example-apache`, run the following command: + +```bash +helm install example-apache . --namespace test --create-namespace +```
\ No newline at end of file diff --git a/gemfeed/examples/conf/f3s/example-apache/helm-chart/templates/apache-deployment.yaml b/gemfeed/examples/conf/f3s/example-apache/helm-chart/templates/apache-deployment.yaml new file mode 100644 index 00000000..364de1da --- /dev/null +++ b/gemfeed/examples/conf/f3s/example-apache/helm-chart/templates/apache-deployment.yaml @@ -0,0 +1,21 @@ +# Apache HTTP Server Deployment +apiVersion: apps/v1 +kind: Deployment +metadata: + name: apache-deployment +spec: + replicas: 1 + selector: + matchLabels: + app: apache + template: + metadata: + labels: + app: apache + spec: + containers: + - name: apache + image: httpd:latest + ports: + # Container port where Apache listens + - containerPort: 80 diff --git a/gemfeed/examples/conf/f3s/example-apache/helm-chart/templates/apache-ingress.yaml b/gemfeed/examples/conf/f3s/example-apache/helm-chart/templates/apache-ingress.yaml new file mode 100644 index 00000000..aa575edd --- /dev/null +++ b/gemfeed/examples/conf/f3s/example-apache/helm-chart/templates/apache-ingress.yaml @@ -0,0 +1,40 @@ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: apache-ingress + namespace: test + annotations: + spec.ingressClassName: traefik + traefik.ingress.kubernetes.io/router.entrypoints: web +spec: + rules: + - host: f3s.buetow.org + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: apache-service + port: + number: 80 + - host: standby.f3s.buetow.org + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: apache-service + port: + number: 80 + - host: www.f3s.buetow.org + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: apache-service + port: + number: 80 diff --git a/gemfeed/examples/conf/f3s/example-apache/helm-chart/templates/apache-service.yaml b/gemfeed/examples/conf/f3s/example-apache/helm-chart/templates/apache-service.yaml new file mode 100644 index 00000000..93b24acb --- /dev/null +++ 
b/gemfeed/examples/conf/f3s/example-apache/helm-chart/templates/apache-service.yaml @@ -0,0 +1,16 @@ +apiVersion: v1 +kind: Service +metadata: + labels: + app: apache + name: apache-service +spec: + ports: + - name: web + port: 80 + protocol: TCP + # Expose port 80 on the service + targetPort: 80 + selector: + # Link this service to pods with the label app=apache + app: apache diff --git a/gemfeed/examples/conf/f3s/freshrss/Justfile b/gemfeed/examples/conf/f3s/freshrss/Justfile new file mode 100644 index 00000000..d88fe3d4 --- /dev/null +++ b/gemfeed/examples/conf/f3s/freshrss/Justfile @@ -0,0 +1,12 @@ +NAMESPACE := "services" +RELEASE_NAME := "freshrss" +CHART_PATH := "./helm-chart" + +install: + helm install {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} --create-namespace + +upgrade: + helm upgrade {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} + +delete: + helm uninstall {{RELEASE_NAME}} --namespace {{NAMESPACE}} diff --git a/gemfeed/examples/conf/f3s/freshrss/README.md b/gemfeed/examples/conf/f3s/freshrss/README.md new file mode 100644 index 00000000..1a883725 --- /dev/null +++ b/gemfeed/examples/conf/f3s/freshrss/README.md @@ -0,0 +1,29 @@ +# FreshRSS Helm Chart + +This chart deploys FreshRSS using a single Deployment, Service, Ingress, and a hostPath-backed PersistentVolume/PersistentVolumeClaim for data. + +## Prerequisites + +Before installing the chart, you must manually create the hostPath directory used by the PersistentVolume (see `templates/persistent-volumes.yaml`): + +- `/data/nfs/k3svolumes/freshrss/data` + +Example commands: + +```bash +sudo mkdir -p /data/nfs/k3svolumes/freshrss/data +# Ensure write permissions for the runtime user/group (nobody:nogroup = 65534:65534) +sudo chown -R 65534:65534 /data/nfs/k3svolumes/freshrss/data +``` + +## Installing the Chart + +To install the chart with the release name `freshrss`, run: + +```bash +helm install freshrss . 
--namespace services --create-namespace +``` + +## Access + +- Ingress host: `freshrss.f3s.buetow.org` diff --git a/gemfeed/examples/conf/f3s/freshrss/helm-chart/Chart.yaml b/gemfeed/examples/conf/f3s/freshrss/helm-chart/Chart.yaml new file mode 100644 index 00000000..05cd76a0 --- /dev/null +++ b/gemfeed/examples/conf/f3s/freshrss/helm-chart/Chart.yaml @@ -0,0 +1,6 @@ +apiVersion: v2 +name: freshrss +description: A Helm chart for deploying FreshRSS. +version: 0.1.0 +appVersion: "latest" + diff --git a/gemfeed/examples/conf/f3s/freshrss/helm-chart/templates/deployment.yaml b/gemfeed/examples/conf/f3s/freshrss/helm-chart/templates/deployment.yaml new file mode 100644 index 00000000..99f114cb --- /dev/null +++ b/gemfeed/examples/conf/f3s/freshrss/helm-chart/templates/deployment.yaml @@ -0,0 +1,48 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: freshrss + namespace: services +spec: + replicas: 1 + selector: + matchLabels: + app: freshrss + template: + metadata: + labels: + app: freshrss + spec: + securityContext: + runAsUser: 65534 # nobody + runAsGroup: 65534 # nobody / nogroup + fsGroup: 65534 # ensure mounted volumes are group-writable + runAsNonRoot: true + containers: + - name: freshrss + image: freshrss/freshrss:latest + ports: + - containerPort: 80 + volumeMounts: + - name: freshrss-data + mountPath: /var/www/FreshRSS/data + volumes: + - name: freshrss-data + persistentVolumeClaim: + claimName: freshrss-data-pvc +--- +apiVersion: v1 +kind: Service +metadata: + labels: + app: freshrss + name: freshrss-service + namespace: services +spec: + ports: + - name: web + port: 80 + protocol: TCP + targetPort: 80 + selector: + app: freshrss diff --git a/gemfeed/examples/conf/f3s/freshrss/helm-chart/templates/ingress.yaml b/gemfeed/examples/conf/f3s/freshrss/helm-chart/templates/ingress.yaml new file mode 100644 index 00000000..67409615 --- /dev/null +++ b/gemfeed/examples/conf/f3s/freshrss/helm-chart/templates/ingress.yaml @@ -0,0 +1,21 @@ +apiVersion: 
networking.k8s.io/v1 +kind: Ingress +metadata: + name: freshrss-ingress + namespace: services + annotations: + spec.ingressClassName: traefik + traefik.ingress.kubernetes.io/router.entrypoints: web +spec: + rules: + - host: freshrss.f3s.buetow.org + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: freshrss-service + port: + number: 80 + diff --git a/gemfeed/examples/conf/f3s/freshrss/helm-chart/templates/persistent-volumes.yaml b/gemfeed/examples/conf/f3s/freshrss/helm-chart/templates/persistent-volumes.yaml new file mode 100644 index 00000000..813d2acb --- /dev/null +++ b/gemfeed/examples/conf/f3s/freshrss/helm-chart/templates/persistent-volumes.yaml @@ -0,0 +1,28 @@ +apiVersion: v1 +kind: PersistentVolume +metadata: + name: freshrss-data-pv +spec: + capacity: + storage: 1Gi + volumeMode: Filesystem + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Retain + hostPath: + path: /data/nfs/k3svolumes/freshrss/data + type: Directory +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: freshrss-data-pvc + namespace: services +spec: + storageClassName: "" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + diff --git a/gemfeed/examples/conf/f3s/miniflux/Justfile b/gemfeed/examples/conf/f3s/miniflux/Justfile new file mode 100644 index 00000000..5becacfe --- /dev/null +++ b/gemfeed/examples/conf/f3s/miniflux/Justfile @@ -0,0 +1,12 @@ +NAMESPACE := "services" +RELEASE_NAME := "miniflux" +CHART_PATH := "./helm-chart" + +install: + helm install {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} --create-namespace + +upgrade: + helm upgrade {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} + +delete: + helm uninstall {{RELEASE_NAME}} --namespace {{NAMESPACE}} diff --git a/gemfeed/examples/conf/f3s/miniflux/README.md b/gemfeed/examples/conf/f3s/miniflux/README.md new file mode 100644 index 00000000..8795b457 --- /dev/null +++ b/gemfeed/examples/conf/f3s/miniflux/README.md @@ -0,0 
+1,56 @@ +# Miniflux Helm Chart + +This chart deploys Miniflux. + +## Prerequisites + +Before installing the chart, you must manually create the following: + +1. **Database Password Secret:** + + Create a secret that contains only the database password. The chart reads + this value and constructs the Miniflux `DATABASE_URL` internally at runtime: + + ```bash + kubectl create secret generic miniflux-db-password \ + --from-literal=fluxdb_password='YOUR_PASSWORD' \ + -n services + ``` + + Replace `YOUR_PASSWORD` with your desired database password. You do not + need to provide a full DSN in the secret; the chart uses the password from + `fluxdb_password` to build: + + `postgres://miniflux:${POSTGRES_PASSWORD}@miniflux-postgres:5432/miniflux?sslmode=disable` + +2. **Admin Password Secret:** + + Create a secret for the initial Miniflux admin user password. The chart + reads this secret into the `ADMIN_PASSWORD` environment variable during + the first startup to create the admin user. The admin username is set + to `admin` in the deployment template. + + ```bash + kubectl create secret generic miniflux-admin-password \ + --from-literal=admin_password='YOUR_ADMIN_PASSWORD' \ + -n services + ``` + + Replace `YOUR_ADMIN_PASSWORD` with your desired password. The secret key + used by the chart is `admin_password`. + +3. **Persistent Volume Directory:** + + You must manually create the directory on your host system to be used by the persistent volume: + + ```bash + mkdir -p /data/nfs/k3svolumes/miniflux/data + ``` + +## Installing the Chart + +To install the chart with the release name `miniflux`, run the following command: + +```bash +helm install miniflux . 
--namespace services --create-namespace +``` diff --git a/gemfeed/examples/conf/f3s/miniflux/helm-chart/Chart.yaml b/gemfeed/examples/conf/f3s/miniflux/helm-chart/Chart.yaml new file mode 100644 index 00000000..f88e3f3d --- /dev/null +++ b/gemfeed/examples/conf/f3s/miniflux/helm-chart/Chart.yaml @@ -0,0 +1,5 @@ +apiVersion: v2 +name: miniflux +description: A Helm chart for deploying Miniflux. +version: 0.1.0 +appVersion: "latest" diff --git a/gemfeed/examples/conf/f3s/miniflux/helm-chart/templates/deployment.yaml b/gemfeed/examples/conf/f3s/miniflux/helm-chart/templates/deployment.yaml new file mode 100644 index 00000000..08647a73 --- /dev/null +++ b/gemfeed/examples/conf/f3s/miniflux/helm-chart/templates/deployment.yaml @@ -0,0 +1,92 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: miniflux-server + labels: + app: miniflux-server +spec: + replicas: 1 + selector: + matchLabels: + app: miniflux-server + template: + metadata: + labels: + app: miniflux-server + spec: + initContainers: + - name: wait-for-postgres + image: postgres:17 + command: ["/bin/sh", "-c"] + args: + - | + echo "Waiting for Postgres at miniflux-postgres:5432..."; + until pg_isready -h miniflux-postgres -p 5432 -U miniflux; do + echo "Postgres not ready, sleeping..."; + sleep 2; + done; + echo "Postgres is ready." 
+ containers: + - name: miniflux + image: miniflux/miniflux:latest + ports: + - containerPort: 8080 + env: + - name: CREATE_ADMIN + value: "1" + - name: ADMIN_USERNAME + value: "admin" + - name: ADMIN_PASSWORD + valueFrom: + secretKeyRef: + name: miniflux-admin-password + key: admin_password + - name: RUN_MIGRATIONS + value: "1" + - name: POLLING_FREQUENCY + value: "10" + - name: POSTGRES_PASSWORD + valueFrom: + secretKeyRef: + name: miniflux-db-password + key: fluxdb_password + command: ["/bin/sh", "-c"] + args: + - export DATABASE_URL="postgres://miniflux:${POSTGRES_PASSWORD}@miniflux-postgres:5432/miniflux?sslmode=disable"; exec /usr/bin/miniflux +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: miniflux-postgres + labels: + app: miniflux-postgres +spec: + replicas: 1 + selector: + matchLabels: + app: miniflux-postgres + template: + metadata: + labels: + app: miniflux-postgres + spec: + containers: + - name: miniflux-postgres + image: postgres:17 + ports: + - containerPort: 5432 + env: + - name: POSTGRES_USER + value: "miniflux" + - name: POSTGRES_PASSWORD + valueFrom: + secretKeyRef: + name: miniflux-db-password + key: fluxdb_password + volumeMounts: + - name: miniflux-postgres-data + mountPath: /var/lib/postgresql/data + volumes: + - name: miniflux-postgres-data + persistentVolumeClaim: + claimName: miniflux-postgres-pvc diff --git a/gemfeed/examples/conf/f3s/miniflux/helm-chart/templates/ingress.yaml b/gemfeed/examples/conf/f3s/miniflux/helm-chart/templates/ingress.yaml new file mode 100644 index 00000000..95f18389 --- /dev/null +++ b/gemfeed/examples/conf/f3s/miniflux/helm-chart/templates/ingress.yaml @@ -0,0 +1,20 @@ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: miniflux-ingress + namespace: services + annotations: + spec.ingressClassName: traefik + traefik.ingress.kubernetes.io/router.entrypoints: web +spec: + rules: + - host: flux.f3s.buetow.org + http: + paths: + - path: / + pathType: Prefix + backend: + service: + 
name: miniflux + port: + number: 8080 diff --git a/gemfeed/examples/conf/f3s/miniflux/helm-chart/templates/persistent-volumes.yaml b/gemfeed/examples/conf/f3s/miniflux/helm-chart/templates/persistent-volumes.yaml new file mode 100644 index 00000000..2c4331c8 --- /dev/null +++ b/gemfeed/examples/conf/f3s/miniflux/helm-chart/templates/persistent-volumes.yaml @@ -0,0 +1,27 @@ +apiVersion: v1 +kind: PersistentVolume +metadata: + name: miniflux-postgres-pv +spec: + capacity: + storage: 1Gi + volumeMode: Filesystem + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Retain + hostPath: + path: /data/nfs/k3svolumes/miniflux/data + type: Directory +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: miniflux-postgres-pvc + namespace: services +spec: + storageClassName: "" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/gemfeed/examples/conf/f3s/miniflux/helm-chart/templates/service.yaml b/gemfeed/examples/conf/f3s/miniflux/helm-chart/templates/service.yaml new file mode 100644 index 00000000..6855888f --- /dev/null +++ b/gemfeed/examples/conf/f3s/miniflux/helm-chart/templates/service.yaml @@ -0,0 +1,23 @@ +apiVersion: v1 +kind: Service +metadata: + name: miniflux +spec: + selector: + app: miniflux-server + ports: + - protocol: TCP + port: 8080 + targetPort: 8080 +--- +apiVersion: v1 +kind: Service +metadata: + name: miniflux-postgres +spec: + selector: + app: miniflux-postgres + ports: + - protocol: TCP + port: 5432 + targetPort: 5432 diff --git a/gemfeed/examples/conf/f3s/opodsync/Justfile b/gemfeed/examples/conf/f3s/opodsync/Justfile new file mode 100644 index 00000000..3143637b --- /dev/null +++ b/gemfeed/examples/conf/f3s/opodsync/Justfile @@ -0,0 +1,12 @@ +NAMESPACE := "services" +RELEASE_NAME := "opodsync" +CHART_PATH := "./helm-chart" + +install: + helm install {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} --create-namespace + +upgrade: + helm upgrade {{RELEASE_NAME}} {{CHART_PATH}} 
--namespace {{NAMESPACE}} + +delete: + helm uninstall {{RELEASE_NAME}} --namespace {{NAMESPACE}}
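The miniflux deployment further above assembles its `DATABASE_URL` at container start from the injected `POSTGRES_PASSWORD` secret. The same string assembly can be rehearsed in plain shell; the password value below is a placeholder for illustration, not the real secret:

```shell
#!/bin/sh
# Rehearse the in-container startup command of the miniflux deployment:
# build the Postgres connection string from the injected POSTGRES_PASSWORD.
# 's3cret' is a made-up placeholder value.
POSTGRES_PASSWORD='s3cret'
DATABASE_URL="postgres://miniflux:${POSTGRES_PASSWORD}@miniflux-postgres:5432/miniflux?sslmode=disable"
echo "$DATABASE_URL"
```

Because the URL is built inside the container, only the password itself needs to live in the `miniflux-db-password` secret.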
\ No newline at end of file
diff --git a/gemfeed/examples/conf/f3s/opodsync/README.md b/gemfeed/examples/conf/f3s/opodsync/README.md
new file mode 100644
index 00000000..fd17938a
--- /dev/null
+++ b/gemfeed/examples/conf/f3s/opodsync/README.md
@@ -0,0 +1,11 @@
+# opodsync
+
+This Helm chart deploys opodsync, a gpodder-compatible podcast sync server.
+
+## Manual steps
+
+Before deploying, you need to create the following directory on your NFS share:
+
+```bash
+mkdir -p /data/nfs/k3svolumes/opodsync/data
+```
diff --git a/gemfeed/examples/conf/f3s/opodsync/helm-chart/Chart.yaml b/gemfeed/examples/conf/f3s/opodsync/helm-chart/Chart.yaml
new file mode 100644
index 00000000..8d41abe1
--- /dev/null
+++ b/gemfeed/examples/conf/f3s/opodsync/helm-chart/Chart.yaml
@@ -0,0 +1,5 @@
+apiVersion: v2
+name: opodsync
+description: A Helm chart for deploying opodsync.
+version: 0.1.0
+appVersion: "latest"
diff --git a/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/configmap-nginx.yaml b/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/configmap-nginx.yaml
new file mode 100644
index 00000000..b4c2ef62
--- /dev/null
+++ b/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/configmap-nginx.yaml
@@ -0,0 +1,46 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: opodsync-nginx-config
+  namespace: services
+data:
+  nginx.conf: |
+    worker_processes 1;
+    events { worker_connections 1024; }
+    http {
+      variables_hash_bucket_size 128;
+      include mime.types;
+      default_type application/octet-stream;
+      sendfile on;
+      keepalive_timeout 65;
+
+      upstream backend {
+        server 127.0.0.1:8080;
+      }
+
+      server {
+        listen 8081;
+
+        # Preserve client details
+        proxy_set_header Host $host;
+        proxy_set_header X-Real-IP $remote_addr;
+        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+        proxy_set_header X-Forwarded-Proto $scheme;
+
+        # Root path internally proxies to /gpodder on backend
+        location = / {
+          proxy_pass http://backend/gpodder;
+        }
+
+        # Pass through existing /gpodder paths unchanged
+        location
/gpodder { + proxy_pass http://backend; + } + + # Fallback: proxy everything else as-is + location / { + proxy_pass http://backend; + } + } + } + diff --git a/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/deployment.yaml b/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/deployment.yaml new file mode 100644 index 00000000..b0f11d9e --- /dev/null +++ b/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/deployment.yaml @@ -0,0 +1,43 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: opodsync + namespace: services +spec: + replicas: 1 + selector: + matchLabels: + app: opodsync + template: + metadata: + labels: + app: opodsync + spec: + containers: + - name: opodsync + image: ganeshlab/opodsync + env: + - name: GPODDER_BASE_URL + value: "https://gpodder.f3s.buetow.org/gpodder" + - name: GPODDER_ALLOW_REGISTRATIONS + value: "true" + ports: + - containerPort: 8080 + volumeMounts: + - name: opodsync-data + mountPath: /var/www/server/data + - name: nginx-proxy + image: nginx:1.25-alpine + ports: + - containerPort: 8081 + volumeMounts: + - name: nginx-config + mountPath: /etc/nginx/nginx.conf + subPath: nginx.conf + volumes: + - name: opodsync-data + persistentVolumeClaim: + claimName: opodsync-data-pvc + - name: nginx-config + configMap: + name: opodsync-nginx-config diff --git a/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/ingress.yaml b/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/ingress.yaml new file mode 100644 index 00000000..a29d27bf --- /dev/null +++ b/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/ingress.yaml @@ -0,0 +1,20 @@ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: opodsync-ingress + namespace: services + annotations: + spec.ingressClassName: traefik + traefik.ingress.kubernetes.io/router.entrypoints: web +spec: + rules: + - host: gpodder.f3s.buetow.org + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: opodsync-service + port: + number: 80 diff 
--git a/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/persistent-volumes.yaml b/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/persistent-volumes.yaml new file mode 100644 index 00000000..0a6dedc0 --- /dev/null +++ b/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/persistent-volumes.yaml @@ -0,0 +1,27 @@ +apiVersion: v1 +kind: PersistentVolume +metadata: + name: opodsync-data-pv +spec: + capacity: + storage: 1Gi + volumeMode: Filesystem + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Retain + hostPath: + path: /data/nfs/k3svolumes/opodsync/data + type: DirectoryOrCreate +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: opodsync-data-pvc + namespace: services +spec: + storageClassName: "" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi
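The volumes above are static hostPath PVs; with `storageClassName: ""` on the claim nothing provisions the backing directory automatically (except where `type: DirectoryOrCreate` is used). A small sketch for pre-creating the path — the `ROOT` indirection is my addition so the step can be rehearsed outside the NFS host:

```shell
#!/bin/sh
# Pre-create the hostPath backing directory for the opodsync data volume.
# ROOT is only for rehearsing in a sandbox; on the real NFS host use ROOT="".
ROOT="${ROOT:-$(mktemp -d)}"
mkdir -p "$ROOT/data/nfs/k3svolumes/opodsync/data"
ls -d "$ROOT/data/nfs/k3svolumes/opodsync/data"
```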
\ No newline at end of file
diff --git a/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/service.yaml b/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/service.yaml
new file mode 100644
index 00000000..16763f03
--- /dev/null
+++ b/gemfeed/examples/conf/f3s/opodsync/helm-chart/templates/service.yaml
@@ -0,0 +1,15 @@
+apiVersion: v1
+kind: Service
+metadata:
+  labels:
+    app: opodsync
+  name: opodsync-service
+  namespace: services
+spec:
+  ports:
+    - name: web
+      port: 80
+      protocol: TCP
+      targetPort: 8081
+  selector:
+    app: opodsync
diff --git a/gemfeed/examples/conf/f3s/radicale/Justfile b/gemfeed/examples/conf/f3s/radicale/Justfile
new file mode 100644
index 00000000..6be7406a
--- /dev/null
+++ b/gemfeed/examples/conf/f3s/radicale/Justfile
@@ -0,0 +1,12 @@
+NAMESPACE := "services"
+RELEASE_NAME := "radicale"
+CHART_PATH := "./helm-chart"
+
+install:
+    helm install {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} --create-namespace
+
+upgrade:
+    helm upgrade {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}}
+
+delete:
+    helm uninstall {{RELEASE_NAME}} --namespace {{NAMESPACE}}
diff --git a/gemfeed/examples/conf/f3s/radicale/helm-chart/Chart.yaml b/gemfeed/examples/conf/f3s/radicale/helm-chart/Chart.yaml
new file mode 100644
index 00000000..421dd485
--- /dev/null
+++ b/gemfeed/examples/conf/f3s/radicale/helm-chart/Chart.yaml
@@ -0,0 +1,5 @@
+apiVersion: v2
+name: radicale
+description: A Helm chart for deploying the Radicale CalDAV/CardDAV server.
+version: 0.1.0
+appVersion: "latest"
diff --git a/gemfeed/examples/conf/f3s/radicale/helm-chart/README.md b/gemfeed/examples/conf/f3s/radicale/helm-chart/README.md
new file mode 100644
index 00000000..6f4f28f7
--- /dev/null
+++ b/gemfeed/examples/conf/f3s/radicale/helm-chart/README.md
@@ -0,0 +1,18 @@
+# Radicale Helm Chart
+
+This chart deploys a CalDAV/CardDAV server using Radicale.
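The Radicale deployment below mounts an `/auth` volume and its init container checks for `/auth/htpasswd`, so the image evidently expects htpasswd-based authentication. A hedged sketch of generating such an entry with `openssl` — user name and password here are placeholders, and Radicale's `htpasswd_encryption` setting must match the hash type you pick:

```shell
#!/bin/sh
# Generate an htpasswd entry (apr1/MD5 variant via openssl; bcrypt via the
# Apache htpasswd tool is the stronger choice if available).
# "alice" and the password are made-up example values.
HASH=$(openssl passwd -apr1 'correct-horse-battery')
printf 'alice:%s\n' "$HASH" > htpasswd
cat htpasswd
```

The resulting file would then be placed at `/data/nfs/k3svolumes/radicale/auth/htpasswd` on the NFS share backing the `/auth` volume.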
+ +## Prerequisites + +Before installing the chart, you must manually create the following directories on your host system to be used by the persistent volumes: + +- `/data/nfs/k3svolumes/radicale/collections` +- `/data/nfs/k3svolumes/radicale/auth` + +## Installing the Chart + +To install the chart with the release name `radicale`, run the following command: + +```bash +helm install radicale . --namespace services --create-namespace +``` diff --git a/gemfeed/examples/conf/f3s/radicale/helm-chart/templates/deployment.yaml b/gemfeed/examples/conf/f3s/radicale/helm-chart/templates/deployment.yaml new file mode 100644 index 00000000..725fcba1 --- /dev/null +++ b/gemfeed/examples/conf/f3s/radicale/helm-chart/templates/deployment.yaml @@ -0,0 +1,67 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: radicale + namespace: services +spec: + replicas: 1 + selector: + matchLabels: + app: radicale + template: + metadata: + labels: + app: radicale + spec: + initContainers: + - name: debug-auth-and-mounts + image: busybox:1.36 + command: ["/bin/sh", "-c"] + args: + - | + set -eu + echo "=== /proc/mounts ===" && cat /proc/mounts || true + echo "=== df -h ===" && df -h || true + echo "=== ls -lna / ===" && ls -lna / || true + echo "=== ls -lna /auth ===" && ls -lna /auth || true + echo "=== ls -lna /collections ===" && ls -lna /collections || true + echo "=== find /auth (maxdepth 2) ===" && find /auth -maxdepth 2 || true + [ -f /auth/htpasswd ] && { echo "=== stat /auth/htpasswd ==="; stat /auth/htpasswd || true; } || echo "htpasswd missing in init" + volumeMounts: + - name: radicale-collections + mountPath: /collections + - name: radicale-auth + mountPath: /auth + containers: + - name: radicale + image: registry.lan.buetow.org:30001/radicale:latest + ports: + - containerPort: 8080 + volumeMounts: + - name: radicale-collections + mountPath: /collections + - name: radicale-auth + mountPath: /auth + volumes: + - name: radicale-collections + persistentVolumeClaim: + 
claimName: radicale-collections-pvc + - name: radicale-auth + persistentVolumeClaim: + claimName: radicale-auth-pvc +--- +apiVersion: v1 +kind: Service +metadata: + labels: + app: radicale + name: radicale-service + namespace: services +spec: + ports: + - name: web + port: 80 + protocol: TCP + targetPort: 8080 + selector: + app: radicale diff --git a/gemfeed/examples/conf/f3s/radicale/helm-chart/templates/ingress.yaml b/gemfeed/examples/conf/f3s/radicale/helm-chart/templates/ingress.yaml new file mode 100644 index 00000000..680ab7d8 --- /dev/null +++ b/gemfeed/examples/conf/f3s/radicale/helm-chart/templates/ingress.yaml @@ -0,0 +1,20 @@ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: radicale-ingress + namespace: services + annotations: + spec.ingressClassName: traefik + traefik.ingress.kubernetes.io/router.entrypoints: web +spec: + rules: + - host: radicale.f3s.buetow.org + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: radicale-service + port: + number: 80 diff --git a/gemfeed/examples/conf/f3s/radicale/helm-chart/templates/persistent-volumes.yaml b/gemfeed/examples/conf/f3s/radicale/helm-chart/templates/persistent-volumes.yaml new file mode 100644 index 00000000..95d64883 --- /dev/null +++ b/gemfeed/examples/conf/f3s/radicale/helm-chart/templates/persistent-volumes.yaml @@ -0,0 +1,55 @@ +apiVersion: v1 +kind: PersistentVolume +metadata: + name: radicale-collections-pv +spec: + capacity: + storage: 1Gi + volumeMode: Filesystem + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Retain + hostPath: + path: /data/nfs/k3svolumes/radicale/collections + type: Directory +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: radicale-collections-pvc + namespace: services +spec: + storageClassName: "" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi +--- +apiVersion: v1 +kind: PersistentVolume +metadata: + name: radicale-auth-pv +spec: + capacity: + storage: 1Gi + 
volumeMode: Filesystem + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Retain + hostPath: + path: /data/nfs/k3svolumes/radicale/auth + type: Directory +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: radicale-auth-pvc + namespace: services +spec: + storageClassName: "" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/gemfeed/examples/conf/f3s/registry/Justfile b/gemfeed/examples/conf/f3s/registry/Justfile new file mode 100644 index 00000000..297d95a7 --- /dev/null +++ b/gemfeed/examples/conf/f3s/registry/Justfile @@ -0,0 +1,12 @@ +NAMESPACE := "infra" +RELEASE_NAME := "registry" +CHART_PATH := "./helm-chart" + +install: + helm install {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} --create-namespace + +upgrade: + helm upgrade {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} + +delete: + helm uninstall {{RELEASE_NAME}} --namespace {{NAMESPACE}} diff --git a/gemfeed/examples/conf/f3s/registry/README.md b/gemfeed/examples/conf/f3s/registry/README.md new file mode 100644 index 00000000..bcf30a3a --- /dev/null +++ b/gemfeed/examples/conf/f3s/registry/README.md @@ -0,0 +1,69 @@ +# Private Docker Registry + +This document describes how to push Docker images to the private registry deployed in your Kubernetes cluster. + +## Prerequisites + +* A running Kubernetes cluster. +* `kubectl` configured to connect to your cluster. +* Docker installed and running on your local machine. + +## Steps + +0. **Create the registry directory in the NFS share** + +1. **Tag your Docker image:** + + Replace `<your-image>` with the name of your local Docker image and `<node-ip>` with the IP address of any node in your Kubernetes cluster. The registry is available on NodePort `30001`. + + ```bash + docker tag <your-image> <node-ip>:30001/<your-image> + ``` + +2. **Push the image to the registry:** + + ```bash + docker push <node-ip>:30001/<your-image> + ``` + +3. 
**Pull the image from the registry (from a Kubernetes pod):**
+
+   You can now use the image in your Kubernetes deployments by referencing it as `docker-registry-service:5000/<your-image>`.
+
+## Communication
+
+The Docker registry is exposed via a static NodePort (`30001`) and uses plain HTTP; it is not configured for TLS. Docker therefore has to be told to treat the registry hosts as insecure registries before it will push to them.
+
+First, run this command to create or update the Docker daemon configuration file. It will overwrite the file if it already exists:
+
+```bash
+sudo bash -c 'echo "{ \"insecure-registries\": [\"r0.lan.buetow.org:30001\",\"r1.lan.buetow.org:30001\",\"r2.lan.buetow.org:30001\"] }" > /etc/docker/daemon.json'
+```
+
+Then restart the Docker daemon for the change to take effect:
+
+```bash
+sudo systemctl restart docker
+```
+
+After that, pushing the anky-sync-server image worked.
+
+## K3s Configuration
+
+To use the private registry from within the k3s cluster, you need to configure each k3s node.
+
+### 1. Update /etc/hosts
+
+On each k3s node, you must ensure that `registry.lan.buetow.org` resolves to the node's loopback address. You can do this by adding an entry to the `/etc/hosts` file.
+
+Run the following command, which will add the entry on `r0`, `r1`, and `r2`:
+```bash
+for node in r0 r1 r2; do ssh root@$node "echo '127.0.0.1 registry.lan.buetow.org' >> /etc/hosts"; done
+```
+
+### 2. Configure K3s to trust the insecure registry
+
+You need to configure each k3s node to trust the insecure registry. This is done by creating a `registries.yaml` file in `/etc/rancher/k3s/` on each node.
+
+The following command will create the file and restart the k3s service.
You will need to run this for each node (`r0`, `r1`, `r2`):
+
+```bash
+ssh root@<node> 'cat > /etc/rancher/k3s/registries.yaml <<EOF
+mirrors:
+  "registry.lan.buetow.org:30001":
+    endpoint:
+      - "http://localhost:30001"
+EOF
+systemctl restart k3s'
+```
+
diff --git a/gemfeed/examples/conf/f3s/registry/helm-chart/Chart.yaml b/gemfeed/examples/conf/f3s/registry/helm-chart/Chart.yaml
new file mode 100644
index 00000000..0f7d68fa
--- /dev/null
+++ b/gemfeed/examples/conf/f3s/registry/helm-chart/Chart.yaml
@@ -0,0 +1,5 @@
+apiVersion: v2
+name: registry
+description: A Helm chart for deploying a private Docker registry.
+version: 0.1.0
+appVersion: "2.0"
diff --git a/gemfeed/examples/conf/f3s/registry/helm-chart/README.md b/gemfeed/examples/conf/f3s/registry/helm-chart/README.md
new file mode 100644
index 00000000..42694360
--- /dev/null
+++ b/gemfeed/examples/conf/f3s/registry/helm-chart/README.md
@@ -0,0 +1,11 @@
+# Docker Registry Helm Chart
+
+This chart deploys a simple Docker registry.
+
+## Installing the Chart
+
+To install the chart with the release name `registry`, run the following command:
+
+```bash
+helm install registry .
+``` diff --git a/gemfeed/examples/conf/f3s/registry/helm-chart/templates/deployment.yaml b/gemfeed/examples/conf/f3s/registry/helm-chart/templates/deployment.yaml new file mode 100644 index 00000000..70522f8d --- /dev/null +++ b/gemfeed/examples/conf/f3s/registry/helm-chart/templates/deployment.yaml @@ -0,0 +1,29 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: docker-registry + namespace: infra + labels: + app: docker-registry +spec: + replicas: 1 + selector: + matchLabels: + app: docker-registry + template: + metadata: + labels: + app: docker-registry + spec: + containers: + - name: registry + image: registry:2 + ports: + - containerPort: 5000 + volumeMounts: + - name: registry-storage + mountPath: /var/lib/registry + volumes: + - name: registry-storage + persistentVolumeClaim: + claimName: docker-registry-pvc diff --git a/gemfeed/examples/conf/f3s/registry/helm-chart/templates/pv.yaml b/gemfeed/examples/conf/f3s/registry/helm-chart/templates/pv.yaml new file mode 100644 index 00000000..fb747ca0 --- /dev/null +++ b/gemfeed/examples/conf/f3s/registry/helm-chart/templates/pv.yaml @@ -0,0 +1,14 @@ +apiVersion: v1 +kind: PersistentVolume +metadata: + name: docker-registry-pv +spec: + capacity: + storage: 5Gi + volumeMode: Filesystem + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Retain + hostPath: + path: /data/nfs/k3svolumes/registry + type: Directory diff --git a/gemfeed/examples/conf/f3s/registry/helm-chart/templates/pvc.yaml b/gemfeed/examples/conf/f3s/registry/helm-chart/templates/pvc.yaml new file mode 100644 index 00000000..e769c893 --- /dev/null +++ b/gemfeed/examples/conf/f3s/registry/helm-chart/templates/pvc.yaml @@ -0,0 +1,12 @@ +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: docker-registry-pvc + namespace: infra +spec: + storageClassName: "" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 5Gi diff --git a/gemfeed/examples/conf/f3s/registry/helm-chart/templates/service.yaml 
b/gemfeed/examples/conf/f3s/registry/helm-chart/templates/service.yaml
new file mode 100644
index 00000000..a97f14e0
--- /dev/null
+++ b/gemfeed/examples/conf/f3s/registry/helm-chart/templates/service.yaml
@@ -0,0 +1,14 @@
+apiVersion: v1
+kind: Service
+metadata:
+  name: docker-registry-service
+  namespace: infra
+spec:
+  selector:
+    app: docker-registry
+  ports:
+    - protocol: TCP
+      port: 5000
+      targetPort: 5000
+      nodePort: 30001
+  type: NodePort
diff --git a/gemfeed/examples/conf/f3s/syncthing/Justfile b/gemfeed/examples/conf/f3s/syncthing/Justfile
new file mode 100644
index 00000000..4be94ee2
--- /dev/null
+++ b/gemfeed/examples/conf/f3s/syncthing/Justfile
@@ -0,0 +1,12 @@
+NAMESPACE := "services"
+RELEASE_NAME := "syncthing"
+CHART_PATH := "./helm-chart"
+
+install:
+    helm install {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} --create-namespace
+
+upgrade:
+    helm upgrade {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}}
+
+delete:
+    helm uninstall {{RELEASE_NAME}} --namespace {{NAMESPACE}}
diff --git a/gemfeed/examples/conf/f3s/syncthing/README.md b/gemfeed/examples/conf/f3s/syncthing/README.md
new file mode 100644
index 00000000..3e2344ab
--- /dev/null
+++ b/gemfeed/examples/conf/f3s/syncthing/README.md
@@ -0,0 +1,20 @@
+# Syncthing Kubernetes Deployment
+
+This directory contains the Kubernetes configuration for deploying Syncthing.
+
+## Deployment
+
+To deploy Syncthing, install the Helm chart via the Justfile in this directory:
+
+```bash
+just install
+```
+
+## Configuration
+
+The deployment uses two persistent volumes:
+- `syncthing-config-pv`: for the Syncthing configuration, mapped to `/data/nfs/k3svolumes/syncthing/config` on the host.
+- `syncthing-data-pv`: for the Syncthing data, mapped to `/data/nfs/k3svolumes/syncthing/data` on the host.
+
+The web UI is available at http://syncthing.f3s.buetow.org.
+The data port is exposed on port 22000.
diff --git a/gemfeed/examples/conf/f3s/syncthing/helm-chart/Chart.yaml b/gemfeed/examples/conf/f3s/syncthing/helm-chart/Chart.yaml new file mode 100644 index 00000000..2b982524 --- /dev/null +++ b/gemfeed/examples/conf/f3s/syncthing/helm-chart/Chart.yaml @@ -0,0 +1,5 @@ +apiVersion: v2 +name: syncthing +description: A Helm chart for deploying Syncthing. +version: 0.1.0 +appVersion: "latest" diff --git a/gemfeed/examples/conf/f3s/syncthing/helm-chart/README.md b/gemfeed/examples/conf/f3s/syncthing/helm-chart/README.md new file mode 100644 index 00000000..0cc23919 --- /dev/null +++ b/gemfeed/examples/conf/f3s/syncthing/helm-chart/README.md @@ -0,0 +1,11 @@ +# Syncthing Helm Chart + +This chart deploys Syncthing. + +## Installing the Chart + +To install the chart with the release name `my-release`, run the following command: + +```bash +helm install syncthing . --namespace services --create-namespace +``` diff --git a/gemfeed/examples/conf/f3s/syncthing/helm-chart/templates/deployment.yaml b/gemfeed/examples/conf/f3s/syncthing/helm-chart/templates/deployment.yaml new file mode 100644 index 00000000..9a85a174 --- /dev/null +++ b/gemfeed/examples/conf/f3s/syncthing/helm-chart/templates/deployment.yaml @@ -0,0 +1,33 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: syncthing + namespace: services +spec: + replicas: 1 + selector: + matchLabels: + app: syncthing + template: + metadata: + labels: + app: syncthing + spec: + containers: + - name: syncthing + image: lscr.io/linuxserver/syncthing:latest + ports: + - containerPort: 8384 + - containerPort: 22000 + volumeMounts: + - name: syncthing-config + mountPath: /config + - name: syncthing-data + mountPath: /data + volumes: + - name: syncthing-config + persistentVolumeClaim: + claimName: syncthing-config-pvc + - name: syncthing-data + persistentVolumeClaim: + claimName: syncthing-data-pvc diff --git a/gemfeed/examples/conf/f3s/syncthing/helm-chart/templates/ingress.yaml 
b/gemfeed/examples/conf/f3s/syncthing/helm-chart/templates/ingress.yaml new file mode 100644 index 00000000..b1e68e1f --- /dev/null +++ b/gemfeed/examples/conf/f3s/syncthing/helm-chart/templates/ingress.yaml @@ -0,0 +1,20 @@ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: syncthing-ingress + namespace: services + annotations: + spec.ingressClassName: traefik + traefik.ingress.kubernetes.io/router.entrypoints: web +spec: + rules: + - host: syncthing.f3s.buetow.org + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: syncthing-service + port: + number: 8384 diff --git a/gemfeed/examples/conf/f3s/syncthing/helm-chart/templates/persistent-volume.yaml b/gemfeed/examples/conf/f3s/syncthing/helm-chart/templates/persistent-volume.yaml new file mode 100644 index 00000000..793ae608 --- /dev/null +++ b/gemfeed/examples/conf/f3s/syncthing/helm-chart/templates/persistent-volume.yaml @@ -0,0 +1,55 @@ +apiVersion: v1 +kind: PersistentVolume +metadata: + name: syncthing-config-pv +spec: + capacity: + storage: 1Gi + volumeMode: Filesystem + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Retain + hostPath: + path: /data/nfs/k3svolumes/syncthing/config + type: Directory +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: syncthing-config-pvc + namespace: services +spec: + storageClassName: "" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi +--- +apiVersion: v1 +kind: PersistentVolume +metadata: + name: syncthing-data-pv +spec: + capacity: + storage: 300Gi + volumeMode: Filesystem + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Retain + hostPath: + path: /data/nfs/k3svolumes/syncthing/data + type: Directory +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: syncthing-data-pvc + namespace: services +spec: + storageClassName: "" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 300Gi
\ No newline at end of file diff --git a/gemfeed/examples/conf/f3s/syncthing/helm-chart/templates/service.yaml b/gemfeed/examples/conf/f3s/syncthing/helm-chart/templates/service.yaml new file mode 100644 index 00000000..74bf5ed4 --- /dev/null +++ b/gemfeed/examples/conf/f3s/syncthing/helm-chart/templates/service.yaml @@ -0,0 +1,19 @@ +apiVersion: v1 +kind: Service +metadata: + labels: + app: syncthing + name: syncthing-service + namespace: services +spec: + ports: + - name: web + port: 8384 + protocol: TCP + targetPort: 8384 + - name: data + port: 22000 + protocol: TCP + targetPort: 22000 + selector: + app: syncthing diff --git a/gemfeed/examples/conf/f3s/wallabag/Justfile b/gemfeed/examples/conf/f3s/wallabag/Justfile new file mode 100644 index 00000000..6c3a8818 --- /dev/null +++ b/gemfeed/examples/conf/f3s/wallabag/Justfile @@ -0,0 +1,12 @@ +NAMESPACE := "services" +RELEASE_NAME := "wallabag" +CHART_PATH := "./helm-chart" + +install: + helm install {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} --create-namespace + +upgrade: + helm upgrade {{RELEASE_NAME}} {{CHART_PATH}} --namespace {{NAMESPACE}} + +delete: + helm uninstall {{RELEASE_NAME}} --namespace {{NAMESPACE}} diff --git a/gemfeed/examples/conf/f3s/wallabag/helm-chart/Chart.yaml b/gemfeed/examples/conf/f3s/wallabag/helm-chart/Chart.yaml new file mode 100644 index 00000000..2fb05aba --- /dev/null +++ b/gemfeed/examples/conf/f3s/wallabag/helm-chart/Chart.yaml @@ -0,0 +1,5 @@ +apiVersion: v2 +name: wallabag +description: A Helm chart for deploying Wallabag. +version: 0.1.0 +appVersion: "latest" diff --git a/gemfeed/examples/conf/f3s/wallabag/helm-chart/README.md b/gemfeed/examples/conf/f3s/wallabag/helm-chart/README.md new file mode 100644 index 00000000..5db600b9 --- /dev/null +++ b/gemfeed/examples/conf/f3s/wallabag/helm-chart/README.md @@ -0,0 +1,18 @@ +# Wallabag Helm Chart + +This chart deploys Wallabag. 
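The Wallabag deployment below configures the external domain through a `SYMFONY__ENV__DOMAIN_NAME` environment variable, which the image applies to its Symfony parameters. The name mapping (strip the prefix, lowercase the remainder) can be sketched as follows — the mechanism description is my reading of how the image's entrypoint consumes these variables, so treat it as an assumption:

```shell
#!/bin/sh
# Map a SYMFONY__ENV__ variable name to the Symfony parameter it configures:
# drop the SYMFONY__ENV__ prefix and lowercase the remainder.
VAR='SYMFONY__ENV__DOMAIN_NAME'
PARAM=$(printf '%s' "${VAR#SYMFONY__ENV__}" | tr '[:upper:]' '[:lower:]')
echo "$PARAM"
```

Here `SYMFONY__ENV__DOMAIN_NAME` would end up as the `domain_name` parameter, which is why the deployment sets it to the ingress URL.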
+ +## Prerequisites + +Before installing the chart, you must manually create the following directories on your host system to be used by the persistent volumes: + +- `/data/nfs/k3svolumes/wallabag/data` +- `/data/nfs/k3svolumes/wallabag/images` + +## Installing the Chart + +To install the chart with the release name `my-release`, run the following command: + +```bash +helm install wallabag . --namespace services --create-namespace +``` diff --git a/gemfeed/examples/conf/f3s/wallabag/helm-chart/templates/deployment.yaml b/gemfeed/examples/conf/f3s/wallabag/helm-chart/templates/deployment.yaml new file mode 100644 index 00000000..25dcffdc --- /dev/null +++ b/gemfeed/examples/conf/f3s/wallabag/helm-chart/templates/deployment.yaml @@ -0,0 +1,51 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: wallabag + namespace: services +spec: + replicas: 1 + selector: + matchLabels: + app: wallabag + template: + metadata: + labels: + app: wallabag + spec: + containers: + - name: wallabag + image: wallabag/wallabag + ports: + - containerPort: 80 + env: + - name: SYMFONY__ENV__DOMAIN_NAME + value: "https://bag.f3s.buetow.org" + volumeMounts: + - name: wallabag-data + mountPath: /var/www/wallabag/data + - name: wallabag-images + mountPath: /var/www/wallabag/web/assets/images + volumes: + - name: wallabag-data + persistentVolumeClaim: + claimName: wallabag-data-pvc + - name: wallabag-images + persistentVolumeClaim: + claimName: wallabag-images-pvc +--- +apiVersion: v1 +kind: Service +metadata: + labels: + app: wallabag + name: wallabag-service + namespace: services +spec: + ports: + - name: web + port: 80 + protocol: TCP + targetPort: 80 + selector: + app: wallabag diff --git a/gemfeed/examples/conf/f3s/wallabag/helm-chart/templates/ingress.yaml b/gemfeed/examples/conf/f3s/wallabag/helm-chart/templates/ingress.yaml new file mode 100644 index 00000000..deb489aa --- /dev/null +++ b/gemfeed/examples/conf/f3s/wallabag/helm-chart/templates/ingress.yaml @@ -0,0 +1,20 @@ 
+apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: wallabag-ingress + namespace: services + annotations: + spec.ingressClassName: traefik + traefik.ingress.kubernetes.io/router.entrypoints: web +spec: + rules: + - host: bag.f3s.buetow.org + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: wallabag-service + port: + number: 80 diff --git a/gemfeed/examples/conf/f3s/wallabag/helm-chart/templates/persistent-volumes.yaml b/gemfeed/examples/conf/f3s/wallabag/helm-chart/templates/persistent-volumes.yaml new file mode 100644 index 00000000..6f5346aa --- /dev/null +++ b/gemfeed/examples/conf/f3s/wallabag/helm-chart/templates/persistent-volumes.yaml @@ -0,0 +1,55 @@ +apiVersion: v1 +kind: PersistentVolume +metadata: + name: wallabag-data-pv +spec: + capacity: + storage: 1Gi + volumeMode: Filesystem + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Retain + hostPath: + path: /data/nfs/k3svolumes/wallabag/data + type: Directory +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: wallabag-data-pvc + namespace: services +spec: + storageClassName: "" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi +--- +apiVersion: v1 +kind: PersistentVolume +metadata: + name: wallabag-images-pv +spec: + capacity: + storage: 1Gi + volumeMode: Filesystem + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Retain + hostPath: + path: /data/nfs/k3svolumes/wallabag/images + type: Directory +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: wallabag-images-pvc + namespace: services +spec: + storageClassName: "" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/gemfeed/examples/conf/frontends/README.md b/gemfeed/examples/conf/frontends/README.md new file mode 100644 index 00000000..e2d59d95 --- /dev/null +++ b/gemfeed/examples/conf/frontends/README.md @@ -0,0 +1,3 @@ +# Frontends + +Rexify my internet facing frontend servers! 
diff --git a/gemfeed/examples/conf/frontends/Rexfile b/gemfeed/examples/conf/frontends/Rexfile new file mode 100644 index 00000000..0079387e --- /dev/null +++ b/gemfeed/examples/conf/frontends/Rexfile @@ -0,0 +1,648 @@ +# How to use: +# +# rex commons +# +# Why use Rex to automate my servers? Because Rex is KISS, Puppet, SALT and Chef +# are not. So, why not use Ansible then? To use Ansible correctly you should also +# install Python on the target machines (not mandatory, though. But better). +# Rex is programmed in Perl and there is already Perl in the base system of OpenBSD. +# Also, I find Perl > Python (my personal opinion). + +use Rex -feature => [ '1.14', 'exec_autodie' ]; +use Rex::Logger; +use File::Slurp; + +# REX CONFIG SECTION + +group frontends => 'blowfish.buetow.org:2', 'fishfinger.buetow.org:2'; +our $ircbouncer_server = 'fishfinger.buetow.org:2'; +group ircbouncer => $ircbouncer_server; +group openbsd_canary => 'fishfinger.buetow.org:2'; + +user 'rex'; +sudo TRUE; + +parallelism 5; + +# CUSTOM (PERL-ish) CONFIG SECTION (what Rex can't do by itself) +# Note we using anonymous subs here. This is so we can pass the subs as +# Rex template variables too. + +our %ips = ( + 'fishfinger' => { + 'ipv4' => '46.23.94.99', + 'ipv6' => '2a03:6000:6f67:624::99', + }, + 'blowfish' => { + 'ipv4' => '23.88.35.144', + 'ipv6' => '2a01:4f8:c17:20f1::42', + }, + 'domain' => 'buetow.org', +); + +$ips{current_master} = $ips{fishfinger}; +$ips{current_master}{fqdn} = 'fishfinger.' . $ips{domain}; + +$ips{current_standby} = $ips{blowfish}; +$ips{current_standby}{fqdn} = 'blowfish.' . $ips{domain}; + +# Gather IPv6 addresses based on hostname. 
+our $ipv6address = sub {
+    my $hostname = shift;
+    my $ip = $ips{$hostname}{ipv6};
+    unless ( defined $ip ) {
+        Rex::Logger::info( "Unable to determine IPv6 address for $hostname", 'error' );
+        return '::1';
+    }
+    return $ip;
+};
+
+# Bootstrapping the FQDN based on the server IP as the hostname and domain
+# facts aren't set yet due to the myname file in the first place.
+our $fqdns = sub {
+    my $ipv4 = shift;
+    for my $hostname ( keys %ips ) {
+        my $host_ips = $ips{$hostname};
+        # Skip the plain 'domain' string entry and the current_master/
+        # current_standby aliases, which would otherwise match as well.
+        next unless ref $host_ips eq 'HASH' && $hostname !~ /^current_/;
+        return "$hostname." . $ips{domain} if $host_ips->{ipv4} eq $ipv4;
+    }
+    Rex::Logger::info( "Unable to determine hostname for $ipv4", 'error' );
+    return 'HOSTNAME-UNKNOWN.' . $ips{domain};
+};
+
+# TODO: Rename rexfilesecrets.txt to confsecrets.txt?! Or wait for RCM migration.
+# The secret store. Note to myself: "geheim cat rexfilesecrets.txt"
+our $secrets = sub { read_file './secrets/' . shift };
+
+our @dns_zones = qw/buetow.org dtail.dev foo.zone irregular.ninja snonux.foo paul.cyou/;
+our @dns_zones_remove = qw//;
+
+# k3s cluster running on FreeBSD in my LAN
+our @f3s_hosts =
+  qw/f3s.buetow.org anki.f3s.buetow.org bag.f3s.buetow.org flux.f3s.buetow.org audiobookshelf.f3s.buetow.org gpodder.f3s.buetow.org radicale.f3s.buetow.org vault.f3s.buetow.org syncthing.f3s.buetow.org uprecords.f3s.buetow.org/;
+
+# Optionally enable this manually for a limited time only, as there is no
+# password protection yet:
+# push @f3s_hosts, 'registry.f3s.buetow.org';
+
+our @acme_hosts =
+  qw/buetow.org git.buetow.org paul.buetow.org joern.buetow.org dory.buetow.org ecat.buetow.org blog.buetow.org fotos.buetow.org znc.buetow.org dtail.dev foo.zone stats.foo.zone irregular.ninja alt.irregular.ninja snonux.foo/;
+push @acme_hosts, @f3s_hosts;
+
+# UTILITY TASKS
+
+task 'id', group => 'frontends', sub { say run 'id' };
+task 'dump_info', group => 'frontends', sub { dump_system_information };
+
+# OPENBSD TASKS SECTION
+
+desc 'Install base stuff';
+task 'base',
+  group => 'frontends',
+  sub {
+    pkg 'figlet', ensure => present;
+    pkg 'tig', ensure
=> present; + pkg 'vger', ensure => present; + pkg 'zsh', ensure => present; + pkg 'bash', ensure => present; + pkg 'helix', ensure => present; + + my @pkg_scripts = qw/uptimed httpd dserver icinga2/; + push @pkg_scripts, 'znc' if connection->server eq $ircbouncer_server; + my $pkg_scripts = join ' ', @pkg_scripts; + append_if_no_such_line '/etc/rc.conf.local', "pkg_scripts=\"$pkg_scripts\""; + run 'touch /etc/rc.local'; + + file '/etc/myname', + content => template( './etc/myname.tpl', fqdns => $fqdns ), + owner => 'root', + group => 'wheel', + mode => '644'; + }; + +desc 'Setup uptimed'; +task 'uptimed', + group => 'frontends', + sub { + pkg 'uptimed', ensure => present; + service 'uptimed', ensure => 'started'; + }; + +desc 'Setup rsync'; +task 'rsync', + group => 'frontends', + sub { + pkg 'rsync', ensure => present; + + # Not required, as we use rsyncd via inetd + # append_if_no_such_line '/etc/rc.conf.local', 'rsyncd_flags='; + + file '/etc/rsyncd.conf', + content => template('./etc/rsyncd.conf.tpl'), + owner => 'root', + group => 'wheel', + mode => '644'; + + file '/usr/local/bin/rsync.sh', + content => template('./scripts/rsync.sh.tpl'), + owner => 'root', + group => 'wheel', + mode => '755'; + + file '/tmp/rsync.cron', + ensure => 'file', + content => "*/5\t*\t*\t*\t*\t-ns /usr/local/bin/rsync.sh", + mode => '600'; + + run '{ crontab -l -u root ; cat /tmp/rsync.cron; } | uniq | crontab -u root -'; + run 'rm /tmp/rsync.cron'; + }; + +desc 'Configure the gemtexter sites'; +task 'gemtexter', + group => 'frontends', + sub { + file '/usr/local/bin/gemtexter.sh', + content => template('./scripts/gemtexter.sh.tpl'), + owner => 'root', + group => 'wheel', + mode => '744'; + + file '/etc/daily.local', + ensure => 'present', + owner => 'root', + group => 'wheel', + mode => '644'; + + append_if_no_such_line '/etc/daily.local', '/usr/local/bin/gemtexter.sh'; + }; + +desc 'Configure taskwarrior reminder'; +task 'taskwarrior', + group => 'frontends', + sub { + pkg 
'taskwarrior', ensure => present; + + file '/usr/local/bin/taskwarrior.sh', + content => template('./scripts/taskwarrior.sh.tpl'), + owner => 'root', + group => 'wheel', + mode => '500'; + + file '/etc/taskrc', + content => template('./etc/taskrc.tpl'), + owner => 'root', + group => 'wheel', + mode => '600'; + + append_if_no_such_line '/etc/daily.local', '/usr/local/bin/taskwarrior.sh'; + }; + +desc 'Configure ACME client'; +task 'acme', + group => 'frontends', + sub { + file '/etc/acme-client.conf', + content => template( './etc/acme-client.conf.tpl', acme_hosts => \@acme_hosts ), + owner => 'root', + group => 'wheel', + mode => '644'; + + file '/usr/local/bin/acme.sh', + content => template( './scripts/acme.sh.tpl', acme_hosts => \@acme_hosts ), + owner => 'root', + group => 'wheel', + mode => '744'; + + file '/etc/daily.local', + ensure => 'present', + owner => 'root', + group => 'wheel', + mode => '644'; + + append_if_no_such_line '/etc/daily.local', '/usr/local/bin/acme.sh'; + }; + +desc 'Invoke ACME client'; +task 'acme_invoke', + group => 'frontends', + sub { + say run '/usr/local/bin/acme.sh'; + }; + +desc 'Setup httpd'; +task 'httpd', + group => 'frontends', + sub { + append_if_no_such_line '/etc/rc.conf.local', 'httpd_flags='; + + file '/etc/httpd.conf', + content => template( './etc/httpd.conf.tpl', acme_hosts => \@acme_hosts ), + owner => 'root', + group => 'wheel', + mode => '644', + on_change => sub { service 'httpd' => 'restart' }; + + file '/var/www/htdocs/buetow.org', ensure => 'directory'; + file '/var/www/htdocs/buetow.org/self', ensure => 'directory'; + + # For failover health-check. 
+ file '/var/www/htdocs/buetow.org/self/index.txt', + ensure => 'file', + content => template('./var/www/htdocs/buetow.org/self/index.txt.tpl'); + + service 'httpd', ensure => 'started'; + }; + +desc 'Setup inetd'; +task 'inetd', + group => 'frontends', + sub { + append_if_no_such_line '/etc/rc.conf.local', 'inetd_flags='; + + file '/etc/login.conf.d/inetd', + source => './etc/login.conf.d/inetd', + owner => 'root', + group => 'wheel', + mode => '644'; + + file '/etc/inetd.conf', + source => './etc/inetd.conf', + owner => 'root', + group => 'wheel', + mode => '644', + on_change => sub { service 'inetd' => 'restart' }; + + service 'inetd', ensure => 'started'; + }; + +desc 'Setup relayd'; +task 'relayd', + group => 'frontends', + sub { + append_if_no_such_line '/etc/rc.conf.local', 'relayd_flags='; + + file '/etc/relayd.conf', + content => template( + './etc/relayd.conf.tpl', + ipv6address => $ipv6address, + f3s_hosts => \@f3s_hosts, + acme_hosts => \@acme_hosts + ), + owner => 'root', + group => 'wheel', + mode => '600', + on_change => sub { service 'relayd' => 'restart' }; + + service 'relayd', ensure => 'started'; + append_if_no_such_line '/etc/daily.local', '/usr/sbin/rcctl start relayd'; + }; + +desc 'Setup OpenSMTPD'; +task 'smtpd', + group => 'frontends', + sub { + Rex::Logger::info('Dealing with mail aliases'); + file '/etc/mail/aliases', + source => './etc/mail/aliases', + owner => 'root', + group => 'wheel', + mode => '644', + on_change => sub { say run 'newaliases' }; + + Rex::Logger::info('Dealing with mail virtual domains'); + file '/etc/mail/virtualdomains', + source => './etc/mail/virtualdomains', + owner => 'root', + group => 'wheel', + mode => '644', + on_change => sub { service 'smtpd' => 'restart' }; + + Rex::Logger::info('Dealing with mail virtual users'); + file '/etc/mail/virtualusers', + source => './etc/mail/virtualusers', + owner => 'root', + group => 'wheel', + mode => '644', + on_change => sub { service 'smtpd' => 'restart' }; + + 
Rex::Logger::info('Dealing with smtpd.conf'); + file '/etc/mail/smtpd.conf', + content => template('./etc/mail/smtpd.conf.tpl'), + owner => 'root', + group => 'wheel', + mode => '644', + on_change => sub { service 'smtpd' => 'restart' }; + + service 'smtpd', ensure => 'started'; + }; + +desc 'Setup DNS server(s)'; +task 'nsd', + group => 'frontends', + sub { + my $restart = FALSE; + append_if_no_such_line '/etc/rc.conf.local', 'nsd_flags='; + + Rex::Logger::info('Dealing with master DNS key'); + file '/var/nsd/etc/key.conf', + content => template( './var/nsd/etc/key.conf.tpl', nsd_key => $secrets->('/var/nsd/etc/nsd_key.txt') ), + owner => 'root', + group => '_nsd', + mode => '640', + on_change => sub { $restart = TRUE }; + + Rex::Logger::info('Dealing with master DNS config'); + file '/var/nsd/etc/nsd.conf', + content => template( './var/nsd/etc/nsd.conf.master.tpl', dns_zones => \@dns_zones, ), + owner => 'root', + group => '_nsd', + mode => '640', + on_change => sub { $restart = TRUE }; + + for my $zone (@dns_zones) { + Rex::Logger::info("Dealing with DNS zone $zone"); + file "/var/nsd/zones/master/$zone.zone", + content => template( + "./var/nsd/zones/master/$zone.zone.tpl", + ips => \%ips, + f3s_hosts => \@f3s_hosts + ), + owner => 'root', + group => 'wheel', + mode => '644', + on_change => sub { $restart = TRUE }; + } + + for my $zone (@dns_zones_remove) { + Rex::Logger::info("Dealing with DNS zone removal $zone"); + file "/var/nsd/zones/master/$zone.zone", ensure => 'absent'; + } + + service 'nsd' => 'restart' if $restart; + service 'nsd', ensure => 'started'; + }; + +desc 'Setup DNS failover script(s)'; +task 'nsd_failover', + group => 'frontends', + sub { + file '/usr/local/bin/dns-failover.ksh', + source => './scripts/dns-failover.ksh', + owner => 'root', + group => 'wheel', + mode => '500'; + + file '/tmp/root.cron', + ensure => 'file', + content => "*\t*\t*\t*\t*\t-ns /usr/local/bin/dns-failover.ksh", + mode => '600'; + + run '{ crontab -l -u root ; cat 
/tmp/root.cron; } | uniq | crontab -u root -'; + run 'rm /tmp/root.cron'; + }; + +desc 'Setup DTail'; +task 'dtail', + group => 'frontends', + sub { + my $restart = FALSE; + + run 'adduser -class nologin -group _dserver -batch _dserver', unless => 'id _dserver'; + run 'usermod -d /var/run/dserver _dserver'; + + file '/etc/rc.d/dserver', + content => template('./etc/rc.d/dserver.tpl'), + owner => 'root', + group => 'wheel', + mode => '755', + on_change => sub { $restart = TRUE }; + + file '/etc/dserver', + ensure => 'directory', + owner => 'root', + group => 'wheel', + mode => '755'; + + file '/etc/dserver/dtail.json', + content => template('./etc/dserver/dtail.json.tpl'), + owner => 'root', + group => 'wheel', + mode => '755', + on_change => sub { $restart = TRUE }; + + file '/usr/local/bin/dserver-update-key-cache.sh', + content => template('./scripts/dserver-update-key-cache.sh.tpl'), + owner => 'root', + group => 'wheel', + mode => '500'; + + append_if_no_such_line '/etc/daily.local', '/usr/local/bin/dserver-update-key-cache.sh'; + + service 'dserver' => 'restart' if $restart; + service 'dserver', ensure => 'started'; + }; + +desc 'Installing Gogios binary'; +task 'gogios_install', + group => 'frontends', + sub { + file '/usr/local/bin/gogios', + source => 'usr/local/bin/gogios', + mode => '0755', + owner => 'root', + group => 'wheel'; + }; + +desc 'Setup Gogios monitoring system'; +task 'gogios', + group => 'frontends', + sub { + pkg 'monitoring-plugins', ensure => present; + pkg 'nrpe', ensure => present; + + my $gogios_path = '/usr/local/bin/gogios'; + + unless ( is_file($gogios_path) ) { + Rex::Logger::info( "Gogios not installed to $gogios_path! 
Run task 'gogios_install'", 'error' ); + } + + run 'adduser -group _gogios -batch _gogios', unless => 'id _gogios'; + run 'usermod -d /var/run/gogios _gogios'; + + file '/etc/gogios.json', + content => template( './etc/gogios.json.tpl', acme_hosts => \@acme_hosts ), + owner => 'root', + group => 'wheel', + mode => '744'; + + file '/var/run/gogios', + ensure => 'directory', + owner => '_gogios', + group => '_gogios', + mode => '755'; + + file '/tmp/gogios.cron', + ensure => 'file', + content => template( './etc/gogios.cron.tpl', gogios_path => $gogios_path ), + mode => '600'; + + run 'cat /tmp/gogios.cron | crontab -u _gogios -'; + run 'rm /tmp/gogios.cron'; + + append_if_no_such_line '/etc/rc.local', 'if [ ! -d /var/run/gogios ]; then mkdir /var/run/gogios; fi'; + append_if_no_such_line '/etc/rc.local', 'chown _gogios /var/run/gogios'; + }; + +use Rex::Commands::Cron; + +desc 'Cron test'; +task 'cron_test', + group => 'openbsd_canary', + sub { + cron + add => '_gogios', + { + minute => '5', + hour => '*', + command => '/bin/ls', + }; + }; + +desc 'Installing Gorum binary'; +task 'gorum_install', + group => 'frontends', + sub { + file '/usr/local/bin/gorum', + source => 'usr/local/bin/gorum', + mode => '0755', + owner => 'root', + group => 'wheel'; + }; + +desc 'Setup Gorum quorum system'; +task 'gorum', + group => 'frontends', + sub { + my $restart = FALSE; + my $gorum_path = '/usr/local/bin/gorum'; + + unless ( is_file($gorum_path) ) { + Rex::Logger::info( "gorum not installed to $gorum_path! 
Run task 'gorum_install'", 'error' ); + } + + run 'adduser -class nologin -group _gorum -batch _gorum', unless => 'id _gorum'; + run 'usermod -d /var/run/gorum _gorum'; + + file '/etc/gorum.json', + content => template('./etc/gorum.json.tpl'), + owner => 'root', + group => 'wheel', + mode => '744', + on_change => sub { $restart = TRUE }; + + file '/var/run/gorum', + ensure => 'directory', + owner => '_gorum', + group => '_gorum', + mode => '755'; + + file '/etc/rc.d/gorum', + content => template('./etc/rc.d/gorum.tpl'), + owner => 'root', + group => 'wheel', + mode => '755', + on_change => sub { $restart = TRUE }; + + service 'gorum' => 'restart' if $restart; + service 'gorum', ensure => 'started'; + }; + +desc 'Setup Foostats'; +task 'foostats', + group => 'frontends', + sub { + use File::Copy; + for my $file (qw/foostats.pl fooodds.txt/) { + Rex::Logger::info("Dealing with $file"); + my $git_script_path = $ENV{HOME} . '/git/foostats/' . $file; + copy( $git_script_path, './scripts/' . $file ) if -f $git_script_path; + } + + file '/usr/local/bin/foostats.pl', + source => './scripts/foostats.pl', + owner => 'root', + group => 'wheel', + mode => '500'; + + file '/var/www/htdocs/buetow.org/self/foostats/fooodds.txt', + source => './scripts/fooodds.txt', + owner => 'root', + group => 'wheel', + mode => '440'; + + file '/var/www/htdocs/gemtexter/stats.foo.zone', + ensure => 'directory', + owner => 'root', + group => 'wheel', + mode => '755'; + + file '/var/gemini/stats.foo.zone', + ensure => 'directory', + owner => 'root', + group => 'wheel', + mode => '755'; + + append_if_no_such_line '/etc/daily.local', 'perl /usr/local/bin/foostats.pl --parse-logs --replicate --report'; + + my @deps = qw(p5-Digest-SHA3 p5-PerlIO-gzip p5-JSON p5-String-Util p5-LWP-Protocol-https); + pkg $_, ensure => present for @deps; + + # For now, custom syslog config only required for foostats (to keep some logs for longer) + # Later, could move out to a separate task here in the Rexfile. 
+ file '/etc/newsyslog.conf', + source => './etc/newsyslog.conf', + owner => 'root', + group => 'wheel', + mode => '644'; + }; + +desc 'Setup IRC bouncer'; +task 'ircbouncer', + group => 'ircbouncer', + sub { + pkg 'znc', ensure => present; + + # Requires runtime config in /var/znc before it can start. + # => geheim search znc.conf + service 'znc', ensure => 'started'; + }; + +# COMBINED TASKS SECTION + +desc 'Common configs of all hosts'; +task 'commons', + group => 'frontends', + sub { + run_task 'base'; + run_task 'nsd'; + run_task 'nsd_failover'; + run_task 'uptimed'; + run_task 'httpd'; + run_task 'gemtexter'; + run_task 'taskwarrior'; + run_task 'acme'; + run_task 'acme_invoke'; + run_task 'inetd'; + run_task 'relayd'; + run_task 'smtpd'; + run_task 'rsync'; + run_task 'gogios'; + + # run_task 'gorum'; + run_task 'foostats'; + + # Requires installing the binaries first! + #run_task 'dtail'; + }; + +1; + +# vim: syntax=perl diff --git a/gemfeed/examples/conf/frontends/etc/acme-client.conf.tpl b/gemfeed/examples/conf/frontends/etc/acme-client.conf.tpl new file mode 100644 index 00000000..b52f5b0e --- /dev/null +++ b/gemfeed/examples/conf/frontends/etc/acme-client.conf.tpl @@ -0,0 +1,41 @@ +# +# $OpenBSD: acme-client.conf,v 1.4 2020/09/17 09:13:06 florian Exp $ +# +authority letsencrypt { + api url "https://acme-v02.api.letsencrypt.org/directory" + account key "/etc/acme/letsencrypt-privkey.pem" +} + +authority letsencrypt-staging { + api url "https://acme-staging-v02.api.letsencrypt.org/directory" + account key "/etc/acme/letsencrypt-staging-privkey.pem" +} + +authority buypass { + api url "https://api.buypass.com/acme/directory" + account key "/etc/acme/buypass-privkey.pem" + contact "mailto:me@example.com" +} + +authority buypass-test { + api url "https://api.test4.buypass.no/acme/directory" + account key "/etc/acme/buypass-test-privkey.pem" + contact "mailto:me@example.com" +} + +<% for my $host (@$acme_hosts) { -%> +<% for my $prefix ('', 'www.', 
'standby.') { -%> +domain <%= $prefix.$host %> { + domain key "/etc/ssl/private/<%= $prefix.$host %>.key" + domain full chain certificate "/etc/ssl/<%= $prefix.$host %>.fullchain.pem" + sign with letsencrypt +} +<% } -%> +<% } -%> + +# For the server itself (e.g. TLS, or monitoring) +domain <%= "$hostname.$domain" %> { + domain key "/etc/ssl/private/<%= "$hostname.$domain" %>.key" + domain full chain certificate "/etc/ssl/<%= "$hostname.$domain" %>.fullchain.pem" + sign with letsencrypt +} diff --git a/gemfeed/examples/conf/frontends/etc/dserver/dtail.json.tpl b/gemfeed/examples/conf/frontends/etc/dserver/dtail.json.tpl new file mode 100644 index 00000000..6b96fbad --- /dev/null +++ b/gemfeed/examples/conf/frontends/etc/dserver/dtail.json.tpl @@ -0,0 +1,127 @@ +{ + "Client": { + "TermColorsEnable": true, + "TermColors": { + "Remote": { + "DelimiterAttr": "Dim", + "DelimiterBg": "Blue", + "DelimiterFg": "Cyan", + "RemoteAttr": "Dim", + "RemoteBg": "Blue", + "RemoteFg": "White", + "CountAttr": "Dim", + "CountBg": "Blue", + "CountFg": "White", + "HostnameAttr": "Bold", + "HostnameBg": "Blue", + "HostnameFg": "White", + "IDAttr": "Dim", + "IDBg": "Blue", + "IDFg": "White", + "StatsOkAttr": "None", + "StatsOkBg": "Green", + "StatsOkFg": "Black", + "StatsWarnAttr": "None", + "StatsWarnBg": "Red", + "StatsWarnFg": "White", + "TextAttr": "None", + "TextBg": "Black", + "TextFg": "White" + }, + "Client": { + "DelimiterAttr": "Dim", + "DelimiterBg": "Yellow", + "DelimiterFg": "Black", + "ClientAttr": "Dim", + "ClientBg": "Yellow", + "ClientFg": "Black", + "HostnameAttr": "Dim", + "HostnameBg": "Yellow", + "HostnameFg": "Black", + "TextAttr": "None", + "TextBg": "Black", + "TextFg": "White" + }, + "Server": { + "DelimiterAttr": "AttrDim", + "DelimiterBg": "BgCyan", + "DelimiterFg": "FgBlack", + "ServerAttr": "AttrDim", + "ServerBg": "BgCyan", + "ServerFg": "FgBlack", + "HostnameAttr": "AttrBold", + "HostnameBg": "BgCyan", + "HostnameFg": "FgBlack", + "TextAttr": "AttrNone", + 
"TextBg": "BgBlack", + "TextFg": "FgWhite" + }, + "Common": { + "SeverityErrorAttr": "AttrBold", + "SeverityErrorBg": "BgRed", + "SeverityErrorFg": "FgWhite", + "SeverityFatalAttr": "AttrBold", + "SeverityFatalBg": "BgMagenta", + "SeverityFatalFg": "FgWhite", + "SeverityWarnAttr": "AttrBold", + "SeverityWarnBg": "BgBlack", + "SeverityWarnFg": "FgWhite" + }, + "MaprTable": { + "DataAttr": "AttrNone", + "DataBg": "BgBlue", + "DataFg": "FgWhite", + "DelimiterAttr": "AttrDim", + "DelimiterBg": "BgBlue", + "DelimiterFg": "FgWhite", + "HeaderAttr": "AttrBold", + "HeaderBg": "BgBlue", + "HeaderFg": "FgWhite", + "HeaderDelimiterAttr": "AttrDim", + "HeaderDelimiterBg": "BgBlue", + "HeaderDelimiterFg": "FgWhite", + "HeaderSortKeyAttr": "AttrUnderline", + "HeaderGroupKeyAttr": "AttrReverse", + "RawQueryAttr": "AttrDim", + "RawQueryBg": "BgBlack", + "RawQueryFg": "FgCyan" + } + } + }, + "Server": { + "SSHBindAddress": "0.0.0.0", + "HostKeyFile": "cache/ssh_host_key", + "HostKeyBits": 2048, + "MapreduceLogFormat": "default", + "MaxConcurrentCats": 2, + "MaxConcurrentTails": 50, + "MaxConnections": 50, + "MaxLineLength": 1048576, + "Permissions": { + "Default": [ + "readfiles:^/.*$" + ], + "Users": { + "paul": [ + "readfiles:^/.*$" + ], + "pbuetow": [ + "readfiles:^/.*$" + ], + "jamesblake": [ + "readfiles:^/tmp/foo.log$", + "readfiles:^/.*$", + "readfiles:!^/tmp/bar.log$" + ] + } + } + }, + "Common": { + "LogDir": "/var/log/dserver", + "Logger": "Fout", + "LogRotation": "Daily", + "CacheDir": "cache", + "SSHPort": 2222, + "LogLevel": "Info" + } +} diff --git a/gemfeed/examples/conf/frontends/etc/gogios.cron.tpl b/gemfeed/examples/conf/frontends/etc/gogios.cron.tpl new file mode 100644 index 00000000..fc6299c3 --- /dev/null +++ b/gemfeed/examples/conf/frontends/etc/gogios.cron.tpl @@ -0,0 +1,3 @@ +0 7 * * * <%= $gogios_path %> -renotify >/dev/null +*/5 8-22 * * * -s <%= $gogios_path %> >/dev/null +0 3 * * 0 <%= $gogios_path %> -force >/dev/null diff --git 
a/gemfeed/examples/conf/frontends/etc/gogios.json.tpl b/gemfeed/examples/conf/frontends/etc/gogios.json.tpl new file mode 100644 index 00000000..683f9de8 --- /dev/null +++ b/gemfeed/examples/conf/frontends/etc/gogios.json.tpl @@ -0,0 +1,98 @@ +<% our $plugin_dir = '/usr/local/libexec/nagios'; -%> +{ + "EmailTo": "paul", + "EmailFrom": "gogios@mx.buetow.org", + "CheckTimeoutS": 10, + "CheckConcurrency": 3, + "StateDir": "/var/run/gogios", + "Checks": { + <% for my $host (qw(master standby)) { -%> + <% for my $proto (4, 6) { -%> + "Check Ping<%= $proto %> <%= $host %>.buetow.org": { + "Plugin": "<%= $plugin_dir %>/check_ping", + "Args": ["-H", "<%= $host %>.buetow.org", "-<%= $proto %>", "-w", "100,10%", "-c", "200,15%"], + "Retries": 3, + "RetryInterval": 3 + }, + <% } -%> + <% } -%> + <% for my $host (qw(fishfinger blowfish)) { -%> + "Check DTail <%= $host %>.buetow.org": { + "Plugin": "/usr/local/bin/dtailhealth", + "Args": ["--server", "<%= $host %>.buetow.org:2222"], + "DependsOn": ["Check Ping4 <%= $host %>.buetow.org", "Check Ping6 <%= $host %>.buetow.org"] + }, + <% } -%> + <% for my $host (qw(fishfinger blowfish)) { -%> + <% for my $proto (4, 6) { -%> + "Check Ping<%= $proto %> <%= $host %>.buetow.org": { + "Plugin": "<%= $plugin_dir %>/check_ping", + "Args": ["-H", "<%= $host %>.buetow.org", "-<%= $proto %>", "-w", "100,10%", "-c", "200,15%"], + "Retries": 3, + "RetryInterval": 3 + }, + <% } -%> + "Check TLS Certificate <%= $host %>.buetow.org": { + "Plugin": "<%= $plugin_dir %>/check_http", + "Args": ["--sni", "-H", "<%= $host %>.buetow.org", "-C", "20" ], + "DependsOn": ["Check Ping4 <%= $host %>.buetow.org", "Check Ping6 <%= $host %>.buetow.org"] + }, + <% } -%> + <% for my $host (@$acme_hosts) { -%> + <% for my $prefix ('', 'standby.', 'www.') { -%> + <% my $depends_on = $prefix eq 'standby.' ? 'standby.buetow.org' : 'master.buetow.org'; -%> + "Check TLS Certificate <%= $prefix . 
$host %>": { + "Plugin": "<%= $plugin_dir %>/check_http", + "Args": ["--sni", "-H", "<%= $prefix . $host %>", "-C", "20" ], + "DependsOn": ["Check Ping4 <%= $depends_on %>", "Check Ping6 <%= $depends_on %>"] + }, + <% for my $proto (4, 6) { -%> + "Check HTTP IPv<%= $proto %> <%= $prefix . $host %>": { + "Plugin": "<%= $plugin_dir %>/check_http", + "Args": ["<%= $prefix . $host %>", "-<%= $proto %>"], + "DependsOn": ["Check Ping<%= $proto %> <%= $depends_on %>"] + }, + <% } -%> + <% } -%> + <% } -%> + <% for my $host (qw(fishfinger blowfish)) { -%> + <% for my $proto (4, 6) { -%> + "Check Dig <%= $host %>.buetow.org IPv<%= $proto %>": { + "Plugin": "<%= $plugin_dir %>/check_dig", + "Args": ["-H", "<%= $host %>.buetow.org", "-l", "buetow.org", "-<%= $proto %>"], + "DependsOn": ["Check Ping<%= $proto %> <%= $host %>.buetow.org"] + }, + "Check SMTP <%= $host %>.buetow.org IPv<%= $proto %>": { + "Plugin": "<%= $plugin_dir %>/check_smtp", + "Args": ["-H", "<%= $host %>.buetow.org", "-<%= $proto %>"], + "DependsOn": ["Check Ping<%= $proto %> <%= $host %>.buetow.org"] + }, + "Check Gemini TCP <%= $host %>.buetow.org IPv<%= $proto %>": { + "Plugin": "<%= $plugin_dir %>/check_tcp", + "Args": ["-H", "<%= $host %>.buetow.org", "-p", "1965", "-<%= $proto %>"], + "DependsOn": ["Check Ping<%= $proto %> <%= $host %>.buetow.org"] + }, + <% } -%> + <% } -%> + "Check Users <%= $hostname %>": { + "Plugin": "<%= $plugin_dir %>/check_users", + "Args": ["-w", "2", "-c", "3"] + }, + "Check SWAP <%= $hostname %>": { + "Plugin": "<%= $plugin_dir %>/check_swap", + "Args": ["-w", "95%", "-c", "90%"] + }, + "Check Procs <%= $hostname %>": { + "Plugin": "<%= $plugin_dir %>/check_procs", + "Args": ["-w", "80", "-c", "100"] + }, + "Check Disk <%= $hostname %>": { + "Plugin": "<%= $plugin_dir %>/check_disk", + "Args": ["-w", "30%", "-c", "10%"] + }, + "Check Load <%= $hostname %>": { + "Plugin": "<%= $plugin_dir %>/check_load", + "Args": ["-w", "2,1,1", "-c", "4,3,3"] + } + } +} diff --git 
a/gemfeed/examples/conf/frontends/etc/gorum.json.tpl b/gemfeed/examples/conf/frontends/etc/gorum.json.tpl new file mode 100644 index 00000000..247a9dbf --- /dev/null +++ b/gemfeed/examples/conf/frontends/etc/gorum.json.tpl @@ -0,0 +1,18 @@ +{ + "StateDir": "/var/run/gorum", + "Address": "<%= $hostname.'.'.$domain %>:4321", + "EmailTo": "", + "EmailFrom": "gorum@mx.buetow.org", + "Nodes": { + "Blowfish": { + "Hostname": "blowfish.buetow.org", + "Port": 4321, + "Priority": 100 + }, + "Fishfinger": { + "Hostname": "fishfinger.buetow.org", + "Port": 4321, + "Priority": 50 + } + } +} diff --git a/gemfeed/examples/conf/frontends/etc/httpd.conf.tpl b/gemfeed/examples/conf/frontends/etc/httpd.conf.tpl new file mode 100644 index 00000000..c3a2764e --- /dev/null +++ b/gemfeed/examples/conf/frontends/etc/httpd.conf.tpl @@ -0,0 +1,184 @@ +<% our @prefixes = ('', 'www.', 'standby.'); -%> +# Plain HTTP for ACME and HTTPS redirect +<% for my $host (@$acme_hosts) { for my $prefix (@prefixes) { -%> +server "<%= $prefix.$host %>" { + listen on * port 80 + log style forwarded + location "/.well-known/acme-challenge/*" { + root "/acme" + request strip 2 + } + location * { + block return 302 "https://$HTTP_HOST$REQUEST_URI" + } +} +<% } } -%> + +# Current server's FQDN (e.g. 
for mail server ACME cert requests) +server "<%= "$hostname.$domain" %>" { + listen on * port 80 + log style forwarded + location "/.well-known/acme-challenge/*" { + root "/acme" + request strip 2 + } + location * { + block return 302 "https://<%= "$hostname.$domain" %>" + } +} + +server "<%= "$hostname.$domain" %>" { + listen on * port 8080 + log style forwarded + location * { + root "/htdocs/buetow.org/self" + directory auto index + } +} + +# Gemtexter hosts +<% for my $host (qw/foo.zone stats.foo.zone/) { for my $prefix (@prefixes) { -%> +server "<%= $prefix.$host %>" { + listen on * port 8080 + log style forwarded + location "/.git*" { + block return 302 "https://<%= $prefix.$host %>" + } + location * { + <% if ($prefix eq 'www.') { -%> + block return 302 "https://<%= $host %>$REQUEST_URI" + <% } else { -%> + root "/htdocs/gemtexter/<%= $host %>" + directory auto index + <% } -%> + } +} +<% } } -%> + +# Redirect to paul.buetow.org +<% for my $prefix (@prefixes) { -%> +server "<%= $prefix %>buetow.org" { + listen on * port 8080 + log style forwarded + location * { + block return 302 "https://paul.buetow.org$REQUEST_URI" + } +} + +# Redirect blog to foo.zone +server "<%= $prefix %>blog.buetow.org" { + listen on * port 8080 + log style forwarded + location * { + block return 302 "https://foo.zone$REQUEST_URI" + } +} + +server "<%= $prefix %>snonux.foo" { + listen on * port 8080 + log style forwarded + location * { + block return 302 "https://foo.zone/about$REQUEST_URI" + } +} + +server "<%= $prefix %>paul.buetow.org" { + listen on * port 8080 + log style forwarded + location * { + block return 302 "https://foo.zone/about$REQUEST_URI" + } +} +<% } -%> + +# Redirect to github.dtail.dev +<% for my $prefix (@prefixes) { -%> +server "<%= $prefix %>dtail.dev" { + listen on * port 8080 + log style forwarded + location * { + block return 302 "https://github.dtail.dev$REQUEST_URI" + } +} +<% } -%> + +# Irregular Ninja special hosts +<% for my $prefix (@prefixes) { -%> 
+server "<%= $prefix %>irregular.ninja" { + listen on * port 8080 + log style forwarded + location * { + root "/htdocs/irregular.ninja" + directory auto index + } +} +<% } -%> + +<% for my $prefix (@prefixes) { -%> +server "<%= $prefix %>alt.irregular.ninja" { + listen on * port 8080 + log style forwarded + location * { + root "/htdocs/alt.irregular.ninja" + directory auto index + } +} +<% } -%> + +# joern special host +<% for my $prefix (@prefixes) { -%> +server "<%= $prefix %>joern.buetow.org" { + listen on * port 8080 + log style forwarded + location * { + root "/htdocs/joern/" + directory auto index + } +} +<% } -%> + +# Dory special host +<% for my $prefix (@prefixes) { -%> +server "<%= $prefix %>dory.buetow.org" { + listen on * port 8080 + log style forwarded + location * { + root "/htdocs/joern/dory.buetow.org" + directory auto index + } +} +<% } -%> + +# ecat special host +<% for my $prefix (@prefixes) { -%> +server "<%= $prefix %>ecat.buetow.org" { + listen on * port 8080 + log style forwarded + location * { + root "/htdocs/joern/ecat.buetow.org" + directory auto index + } +} +<% } -%> + +<% for my $prefix (@prefixes) { -%> +server "<%= $prefix %>fotos.buetow.org" { + listen on * port 8080 + log style forwarded + root "/htdocs/buetow.org/fotos" + directory auto index +} +<% } -%> + +# Defaults +server "default" { + listen on * port 80 + log style forwarded + block return 302 "https://foo.zone$REQUEST_URI" +} + +server "default" { + listen on * port 8080 + log style forwarded + block return 302 "https://foo.zone$REQUEST_URI" +} diff --git a/gemfeed/examples/conf/frontends/etc/inetd.conf b/gemfeed/examples/conf/frontends/etc/inetd.conf new file mode 100644 index 00000000..13163877 --- /dev/null +++ b/gemfeed/examples/conf/frontends/etc/inetd.conf @@ -0,0 +1,2 @@ +127.0.0.1:11965 stream tcp nowait www /usr/local/bin/vger vger -v +rsync stream tcp nowait root /usr/local/bin/rsync rsyncd --daemon diff --git 
a/gemfeed/examples/conf/frontends/etc/login.conf.d/inetd b/gemfeed/examples/conf/frontends/etc/login.conf.d/inetd new file mode 100644 index 00000000..c8620c41 --- /dev/null +++ b/gemfeed/examples/conf/frontends/etc/login.conf.d/inetd @@ -0,0 +1,3 @@ +inetd:\ + :maxproc=10:\ + :tc=daemon: diff --git a/gemfeed/examples/conf/frontends/etc/mail/aliases b/gemfeed/examples/conf/frontends/etc/mail/aliases new file mode 100644 index 00000000..91bf1d06 --- /dev/null +++ b/gemfeed/examples/conf/frontends/etc/mail/aliases @@ -0,0 +1,103 @@ +# +# $OpenBSD: aliases,v 1.68 2020/01/24 06:17:37 tedu Exp $ +# +# Aliases in this file will NOT be expanded in the header from +# Mail, but WILL be visible over networks or from /usr/libexec/mail.local. +# +# >>>>>>>>>> The program "newaliases" must be run after +# >> NOTE >> this file is updated for any changes to +# >>>>>>>>>> show through to smtpd. +# + +# Basic system aliases -- these MUST be present +MAILER-DAEMON: postmaster +postmaster: root + +# General redirections for important pseudo accounts +daemon: root +ftp-bugs: root +operator: root +www: root +admin: root + +# Redirections for pseudo accounts that should not receive mail +_bgpd: /dev/null +_dhcp: /dev/null +_dpb: /dev/null +_dvmrpd: /dev/null +_eigrpd: /dev/null +_file: /dev/null +_fingerd: /dev/null +_ftp: /dev/null +_hostapd: /dev/null +_identd: /dev/null +_iked: /dev/null +_isakmpd: /dev/null +_iscsid: /dev/null +_ldapd: /dev/null +_ldpd: /dev/null +_mopd: /dev/null +_nsd: /dev/null +_ntp: /dev/null +_ospfd: /dev/null +_ospf6d: /dev/null +_pbuild: /dev/null +_pfetch: /dev/null +_pflogd: /dev/null +_ping: /dev/null +_pkgfetch: /dev/null +_pkguntar: /dev/null +_portmap: /dev/null +_ppp: /dev/null +_rad: /dev/null +_radiusd: /dev/null +_rbootd: /dev/null +_relayd: /dev/null +_ripd: /dev/null +_rstatd: /dev/null +_rusersd: /dev/null +_rwalld: /dev/null +_smtpd: /dev/null +_smtpq: /dev/null +_sndio: /dev/null +_snmpd: /dev/null +_spamd: /dev/null +_switchd: /dev/null 
+_syslogd: /dev/null +_tcpdump: /dev/null +_traceroute: /dev/null +_tftpd: /dev/null +_unbound: /dev/null +_unwind: /dev/null +_vmd: /dev/null +_x11: /dev/null +_ypldap: /dev/null +bin: /dev/null +build: /dev/null +nobody: /dev/null +_tftp_proxy: /dev/null +_ftp_proxy: /dev/null +_sndiop: /dev/null +_syspatch: /dev/null +_slaacd: /dev/null +sshd: /dev/null + +# Well-known aliases -- these should be filled in! +root: paul +manager: root +dumper: root + +# RFC 2142: NETWORK OPERATIONS MAILBOX NAMES +abuse: root +noc: root +security: root + +# RFC 2142: SUPPORT MAILBOX NAMES FOR SPECIFIC INTERNET SERVICES +hostmaster: root +# usenet: root +# news: usenet +webmaster: root +# ftp: root + +paul: paul.buetow@protonmail.com +albena: albena.buetow@protonmail.com diff --git a/gemfeed/examples/conf/frontends/etc/mail/smtpd.conf.tpl b/gemfeed/examples/conf/frontends/etc/mail/smtpd.conf.tpl new file mode 100644 index 00000000..7764b345 --- /dev/null +++ b/gemfeed/examples/conf/frontends/etc/mail/smtpd.conf.tpl @@ -0,0 +1,23 @@ +# This is the smtpd server system-wide configuration file. +# See smtpd.conf(5) for more information. + +# I used https://www.checktls.com/TestReceiver for testing. 
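+# A complementary local spot-check can be done with a standard openssl +# invocation (a sketch, not part of the original setup: assumes OpenSSL 1.1+ +# on the client and that outbound port 25 isn't filtered; mx.buetow.org stands +# in for whatever this template's hostname expands to): +# openssl s_client -connect mx.buetow.org:25 -starttls smtp -brief </dev/null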
+ +pki "buetow_org_tls" cert "/etc/ssl/<%= "$hostname.$domain" %>.fullchain.pem" +pki "buetow_org_tls" key "/etc/ssl/private/<%= "$hostname.$domain" %>.key" + +table aliases file:/etc/mail/aliases +table virtualdomains file:/etc/mail/virtualdomains +table virtualusers file:/etc/mail/virtualusers + +listen on socket +listen on all tls pki "buetow_org_tls" hostname "<%= "$hostname.$domain" %>" +#listen on all + +action localmail mbox alias <aliases> +action receive mbox virtual <virtualusers> +action outbound relay + +match from any for domain <virtualdomains> action receive +match from local for local action localmail +match from local for any action outbound diff --git a/gemfeed/examples/conf/frontends/etc/mail/virtualdomains b/gemfeed/examples/conf/frontends/etc/mail/virtualdomains new file mode 100644 index 00000000..b59554ac --- /dev/null +++ b/gemfeed/examples/conf/frontends/etc/mail/virtualdomains @@ -0,0 +1,20 @@ +buetow.org +paul.buetow.org +mx.buetow.org +de.buetow.org +bg.buetow.org +uk.buetow.org +us.buetow.org +es.buetow.org +dev.buetow.org +oss.buetow.org +ex.buetow.org +xxx.buetow.org +newsletter.buetow.org +gadgets.buetow.org +orders.buetow.org +nospam.buetow.org +snonux.foo +dtail.dev +foo.zone +paul.cyou diff --git a/gemfeed/examples/conf/frontends/etc/mail/virtualusers b/gemfeed/examples/conf/frontends/etc/mail/virtualusers new file mode 100644 index 00000000..6cfac58b --- /dev/null +++ b/gemfeed/examples/conf/frontends/etc/mail/virtualusers @@ -0,0 +1,5 @@ +albena@buetow.org albena.buetow@protonmail.com +joern@buetow.org df2hbradio@gmail.com +dory@buetow.org df2hbradio@gmail.com +ecat@buetow.org df2hbradio@gmail.com +@ paul.buetow@protonmail.com diff --git a/gemfeed/examples/conf/frontends/etc/myname.tpl b/gemfeed/examples/conf/frontends/etc/myname.tpl new file mode 100644 index 00000000..dcd4ca04 --- /dev/null +++ b/gemfeed/examples/conf/frontends/etc/myname.tpl @@ -0,0 +1 @@ +<%= $fqdns->($vio0_ip) %> diff --git 
a/gemfeed/examples/conf/frontends/etc/newsyslog.conf b/gemfeed/examples/conf/frontends/etc/newsyslog.conf new file mode 100644 index 00000000..bbd1aa55 --- /dev/null +++ b/gemfeed/examples/conf/frontends/etc/newsyslog.conf @@ -0,0 +1,14 @@ +# logfile_name owner:group mode count size when flags +/var/cron/log root:wheel 600 3 10 * Z +/var/log/authlog root:wheel 640 7 * 168 Z +/var/log/daemon 640 14 300 * Z +/var/log/lpd-errs 640 7 10 * Z +/var/log/maillog 640 7 * 24 Z +/var/log/messages 644 5 300 * Z +/var/log/secure 600 7 * 168 Z +/var/log/wtmp 644 7 * $M1D4 B "" +/var/log/xferlog 640 7 250 * Z +/var/log/pflog 600 3 250 * ZB "pkill -HUP -u root -U root -t - -x pflogd" +/var/www/logs/access.log 644 14 * $W0 Z "pkill -USR1 -u root -U root -x httpd" +/var/www/logs/error.log 644 7 250 * Z "pkill -USR1 -u root -U root -x httpd" +/var/log/fooodds 640 7 300 * Z diff --git a/gemfeed/examples/conf/frontends/etc/rc.conf.local b/gemfeed/examples/conf/frontends/etc/rc.conf.local new file mode 100644 index 00000000..842f16d7 --- /dev/null +++ b/gemfeed/examples/conf/frontends/etc/rc.conf.local @@ -0,0 +1,5 @@ +httpd_flags= +inetd_flags= +nsd_flags= +pkg_scripts="uptimed httpd" +relayd_flags= diff --git a/gemfeed/examples/conf/frontends/etc/rc.d/dserver.tpl b/gemfeed/examples/conf/frontends/etc/rc.d/dserver.tpl new file mode 100755 index 00000000..aec80f54 --- /dev/null +++ b/gemfeed/examples/conf/frontends/etc/rc.d/dserver.tpl @@ -0,0 +1,16 @@ +#!/bin/ksh + +daemon="/usr/local/bin/dserver" +daemon_flags="-cfg /etc/dserver/dtail.json" +daemon_user="_dserver" + +. 
/etc/rc.d/rc.subr + +rc_reload=NO + +rc_pre() { + install -d -o _dserver /var/log/dserver + install -d -o _dserver /var/run/dserver/cache +} + +rc_cmd $1 & diff --git a/gemfeed/examples/conf/frontends/etc/rc.d/gorum.tpl b/gemfeed/examples/conf/frontends/etc/rc.d/gorum.tpl new file mode 100755 index 00000000..3b4f403d --- /dev/null +++ b/gemfeed/examples/conf/frontends/etc/rc.d/gorum.tpl @@ -0,0 +1,16 @@ +#!/bin/ksh + +daemon="/usr/local/bin/gorum" +daemon_flags="-cfg /etc/gorum.json" +daemon_user="_gorum" +daemon_logger="daemon.info" + +. /etc/rc.d/rc.subr + +rc_reload=NO + +rc_pre() { + install -d -o _gorum /var/log/gorum +} + +rc_cmd $1 & diff --git a/gemfeed/examples/conf/frontends/etc/relayd.conf.tpl b/gemfeed/examples/conf/frontends/etc/relayd.conf.tpl new file mode 100644 index 00000000..1900c0bf --- /dev/null +++ b/gemfeed/examples/conf/frontends/etc/relayd.conf.tpl @@ -0,0 +1,86 @@ +<% our @prefixes = ('', 'www.', 'standby.'); -%> +log connection + +# Wireguard tunnel endpoints of the k3s cluster nodes, which run as Linux VMs in FreeBSD bhyve +table <f3s> { + 192.168.2.120 + 192.168.2.121 + 192.168.2.122 +} + +# Same backends, separate table for registry service on port 30001 +table <f3s_registry> { + 192.168.2.120 + 192.168.2.121 + 192.168.2.122 +} + +# Local OpenBSD httpd +table <localhost> { + 127.0.0.1 + ::1 +} + +http protocol "https" { + <% for my $host (@$acme_hosts) { for my $prefix (@prefixes) { -%> + tls keypair <%= $prefix.$host -%> + <% } } -%> + tls keypair <%= $hostname.'.'.$domain -%> + + match request header set "X-Forwarded-For" value "$REMOTE_ADDR" + match request header set "X-Forwarded-Proto" value "https" + + # WebSocket support for audiobookshelf + pass header "Connection" + pass header "Upgrade" + pass header "Sec-WebSocket-Key" + pass header "Sec-WebSocket-Version" + pass header "Sec-WebSocket-Extensions" + pass header "Sec-WebSocket-Protocol" + + <% for my $host (@$f3s_hosts) { for my $prefix (@prefixes) { -%> + <% if 
($host eq 'registry.f3s.buetow.org') { -%> + match request quick header "Host" value "<%= $prefix.$host -%>" forward to <f3s_registry> + <% } else { -%> + match request quick header "Host" value "<%= $prefix.$host -%>" forward to <f3s> + <% } } } -%> +} + +relay "https4" { + listen on <%= $vio0_ip %> port 443 tls + protocol "https" + forward to <localhost> port 8080 + forward to <f3s_registry> port 30001 check tcp + forward to <f3s> port 80 check tcp +} + +relay "https6" { + listen on <%= $ipv6address->($hostname) %> port 443 tls + protocol "https" + forward to <localhost> port 8080 + forward to <f3s_registry> port 30001 check tcp + forward to <f3s> port 80 check tcp +} + +tcp protocol "gemini" { + tls keypair foo.zone + tls keypair stats.foo.zone + tls keypair snonux.foo + tls keypair paul.buetow.org + tls keypair standby.foo.zone + tls keypair standby.stats.foo.zone + tls keypair standby.snonux.foo + tls keypair standby.paul.buetow.org +} + +relay "gemini4" { + listen on <%= $vio0_ip %> port 1965 tls + protocol "gemini" + forward to 127.0.0.1 port 11965 +} + +relay "gemini6" { + listen on <%= $ipv6address->($hostname) %> port 1965 tls + protocol "gemini" + forward to 127.0.0.1 port 11965 +} diff --git a/gemfeed/examples/conf/frontends/etc/rsyncd.conf.tpl b/gemfeed/examples/conf/frontends/etc/rsyncd.conf.tpl new file mode 100644 index 00000000..e9fe3cf8 --- /dev/null +++ b/gemfeed/examples/conf/frontends/etc/rsyncd.conf.tpl @@ -0,0 +1,28 @@ +<% my $allow = '*.wg0.wan.buetow.org,*.wg0,localhost'; %> +max connections = 5 +timeout = 300 + +[joernshtdocs] +comment = Joerns htdocs +path = /var/www/htdocs/joern +read only = yes +list = yes +uid = www +gid = www +hosts allow = <%= $allow %> + +# [publicgemini] +# comment = Public Gemini capsule content +# path = /var/gemini +# read only = yes +# list = yes +# uid = www +# gid = www +# hosts allow = <%= $allow %> + +# [sslcerts] +# comment = TLS certificates +# path = /etc/ssl +# read only = yes +# list = yes +# hosts 
allow = <%= $allow %> diff --git a/gemfeed/examples/conf/frontends/etc/taskrc.tpl b/gemfeed/examples/conf/frontends/etc/taskrc.tpl new file mode 100644 index 00000000..ed97d385 --- /dev/null +++ b/gemfeed/examples/conf/frontends/etc/taskrc.tpl @@ -0,0 +1,40 @@ +# [Created by task 2.6.2 7/9/2023 20:52:31] +# Taskwarrior program configuration file. +# For more documentation, see https://taskwarrior.org or try 'man task', 'man task-color', +# 'man task-sync' or 'man taskrc' + +# Here is an example of entries that use the default, override and blank values +# variable=foo -- By specifying a value, this overrides the default +# variable= -- By specifying no value, this means no default +# #variable=foo -- By commenting out the line, or deleting it, this uses the default + +# You can also reference environment variables: +# variable=$HOME/task +# variable=$VALUE + +# Use the command 'task show' to see all defaults and overrides + +# Files +data.location=/home/git/.task + +# To use the default location of the XDG directories, +# move this configuration file from ~/.taskrc to ~/.config/task/taskrc and uncomment below + +#data.location=~/.local/share/task +#hooks.location=~/.config/task/hooks + +# Color theme (uncomment one to use) +#include light-16.theme +#include light-256.theme +#include dark-16.theme +#include dark-256.theme +#include dark-red-256.theme +#include dark-green-256.theme +#include dark-blue-256.theme +#include dark-violets-256.theme +#include dark-yellow-green.theme +#include dark-gray-256.theme +#include dark-gray-blue-256.theme +#include solarized-dark-256.theme +#include solarized-light-256.theme +#include no-color.theme diff --git a/gemfeed/examples/conf/frontends/etc/tmux.conf b/gemfeed/examples/conf/frontends/etc/tmux.conf new file mode 100644 index 00000000..14493260 --- /dev/null +++ b/gemfeed/examples/conf/frontends/etc/tmux.conf @@ -0,0 +1,24 @@ +set-option -g allow-rename off +set-option -g default-terminal "screen-256color" +set-option -g 
history-limit 100000 +set-option -g status-bg '#444444' +set-option -g status-fg '#ffa500' + +set-window-option -g mode-keys vi + +bind-key h select-pane -L +bind-key j select-pane -D +bind-key k select-pane -U +bind-key l select-pane -R + +bind-key H resize-pane -L 5 +bind-key J resize-pane -D 5 +bind-key K resize-pane -U 5 +bind-key L resize-pane -R 5 + +bind-key b break-pane -d +bind-key c new-window -c '#{pane_current_path}' +bind-key p setw synchronize-panes off +bind-key P setw synchronize-panes on +bind-key r source-file ~/.tmux.conf \; display-message "~/.tmux.conf reloaded" +bind-key T choose-tree diff --git a/gemfeed/examples/conf/frontends/scripts/acme.sh.tpl b/gemfeed/examples/conf/frontends/scripts/acme.sh.tpl new file mode 100644 index 00000000..8d306092 --- /dev/null +++ b/gemfeed/examples/conf/frontends/scripts/acme.sh.tpl @@ -0,0 +1,68 @@ +#!/bin/sh + +MY_IP=`ifconfig vio0 | awk '$1 == "inet" { print $2 }'` + +# A new host may not have a cert yet, so copy the foo.zone cert as a +# placeholder so that services can at least start properly. The +# cert will be replaced by subsequent acme-client runs. +ensure_placeholder_cert () { + host=$1 + copy_from=foo.zone + + if [ ! -f /etc/ssl/$host.crt ]; then + cp -v /etc/ssl/$copy_from.crt /etc/ssl/$host.crt + cp -v /etc/ssl/$copy_from.fullchain.pem /etc/ssl/$host.fullchain.pem + cp -v /etc/ssl/private/$copy_from.key /etc/ssl/private/$host.key + fi +} + +handle_cert () { + host=$1 + host_ip=`host $host | awk '/has address/ { print $(NF) }'` + + grep -q "^server \"$host\"" /etc/httpd.conf + if [ $? -ne 0 ]; then + echo "Host $host not configured in httpd, skipping..." + return 1 + fi + ensure_placeholder_cert "$host" + + if [ "$MY_IP" != "$host_ip" ]; then + echo "Not serving $host, skipping..." + return 1 + fi + + # Create a symlink so that relayd can also read it. + crt_path=/etc/ssl/$host + if [ -e $crt_path.crt ]; then + rm $crt_path.crt + fi + ln -s $crt_path.fullchain.pem $crt_path.crt + # Request and renew the certificate. 
+ /usr/sbin/acme-client -v $host +} + +has_update=no +<% for my $host (@$acme_hosts) { -%> +<% for my $prefix ('', 'www.', 'standby.') { -%> +handle_cert <%= $prefix.$host %> +if [ $? -eq 0 ]; then + has_update=yes +fi +<% } -%> +<% } -%> + +# Current server's FQDN (e.g. for mail server certs) +handle_cert <%= "$hostname.$domain" %> +if [ $? -eq 0 ]; then + has_update=yes +fi + +# Pick up the new certs. +if [ $has_update = yes ]; then + # TLS offloading fully moved to relayd now + # /usr/sbin/rcctl reload httpd + + /usr/sbin/rcctl reload relayd + /usr/sbin/rcctl restart smtpd +fi diff --git a/gemfeed/examples/conf/frontends/scripts/dns-failover.ksh b/gemfeed/examples/conf/frontends/scripts/dns-failover.ksh new file mode 100644 index 00000000..dfc24ee3 --- /dev/null +++ b/gemfeed/examples/conf/frontends/scripts/dns-failover.ksh @@ -0,0 +1,133 @@ +#!/bin/ksh + +ZONES_DIR=/var/nsd/zones/master/ +DEFAULT_MASTER=fishfinger.buetow.org +DEFAULT_STANDBY=blowfish.buetow.org + +determine_master_and_standby () { + local master=$DEFAULT_MASTER + local standby=$DEFAULT_STANDBY + + # Weekly auto-failover for Let's Encrypt automation + local -i -r week_of_the_year=$(date +%U) + if [ $(( week_of_the_year % 2 )) -ne 0 ]; then + local tmp=$master + master=$standby + standby=$tmp + fi + + local -i health_ok=1 + if ! ftp -4 -o - https://$master/index.txt | grep -q "Welcome to $master"; then + echo "https://$master/index.txt IPv4 health check failed" + health_ok=0 + elif ! 
ftp -6 -o - https://$master/index.txt | grep -q "Welcome to $master"; then + echo "https://$master/index.txt IPv6 health check failed" + health_ok=0 + fi + + if [ $health_ok -eq 0 ]; then + local tmp=$master + master=$standby + standby=$tmp + fi + + echo "Master is $master, standby is $standby" + + host $master | awk '/has address/ { print $(NF) }' >/var/nsd/run/master_a + host $master | awk '/has IPv6 address/ { print $(NF) }' >/var/nsd/run/master_aaaa + host $standby | awk '/has address/ { print $(NF) }' >/var/nsd/run/standby_a + host $standby | awk '/has IPv6 address/ { print $(NF) }' >/var/nsd/run/standby_aaaa +} + +transform () { + sed -E ' + /IN A .*; Enable failover/ { + /^standby/! { + s/^(.*) 300 IN A (.*) ; (.*)/\1 300 IN A '$(cat /var/nsd/run/master_a)' ; \3/; + } + /^standby/ { + s/^(.*) 300 IN A (.*) ; (.*)/\1 300 IN A '$(cat /var/nsd/run/standby_a)' ; \3/; + } + } + /IN AAAA .*; Enable failover/ { + /^standby/! { + s/^(.*) 300 IN AAAA (.*) ; (.*)/\1 300 IN AAAA '$(cat /var/nsd/run/master_aaaa)' ; \3/; + } + /^standby/ { + s/^(.*) 300 IN AAAA (.*) ; (.*)/\1 300 IN AAAA '$(cat /var/nsd/run/standby_aaaa)' ; \3/; + } + } + / ; serial/ { + s/^( +) ([0-9]+) .*; (.*)/\1 '$(date +%s)' ; \3/; + } + ' +} + +zone_is_ok () { + local -r zone=$1 + local -r domain=${zone%.zone} + dig $domain @localhost | grep -q "$domain.*IN.*NS" +} + +failover_zone () { + local -r zone_file=$1 + local -r zone=$(basename $zone_file) + + # Race condition (e.g. script execution aborted in the middle of a previous run) + if [ -f $zone_file.bak ]; then + mv $zone_file.bak $zone_file + fi + + cat $zone_file | transform > $zone_file.new.tmp + + grep -v ' ; serial' $zone_file.new.tmp > $zone_file.new.noserial.tmp + grep -v ' ; serial' $zone_file > $zone_file.old.noserial.tmp + + echo "Has zone $zone_file changed?" 
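The last rule of the transform() sed above bumps the SOA serial to the current Unix epoch so that secondaries pick up the rewritten zone. A simplified, standalone sketch of just that rule (the real script additionally rewrites the A/AAAA records tagged `; Enable failover`):

```shell
# Simplified form of the "; serial" rule from transform(): replace the
# numeric serial with the current Unix epoch so that secondaries
# notice the zone has changed. Hypothetical helper, for illustration.
bump_serial() {
    sed -E "s/^( +)[0-9]+ .*; serial/\\1$(date +%s) ; serial/"
}
```

Feeding a zone file through `bump_serial` leaves every line untouched except the serial line, whose number becomes the epoch at the time of the run.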
+ if diff -u $zone_file.old.noserial.tmp $zone_file.new.noserial.tmp; then + echo "The zone $zone_file hasn't changed" + rm $zone_file.*.tmp + return 0 + fi + + cp $zone_file $zone_file.bak + mv $zone_file.new.tmp $zone_file + rm $zone_file.*.tmp + echo "Reloading nsd" + nsd-control reload + + if ! zone_is_ok $zone; then + echo "Rolling back $zone_file changes" + cp $zone_file $zone_file.invalid + mv $zone_file.bak $zone_file + echo "Reloading nsd" + nsd-control reload + zone_is_ok $zone + return 3 + fi + + for cleanup in invalid bak; do + if [ -f $zone_file.$cleanup ]; then + rm $zone_file.$cleanup + fi + done + + echo "Failover of zone $zone completed" + return 1 +} + +main () { + determine_master_and_standby + + local -i ec=0 + for zone_file in $ZONES_DIR/*.zone; do + if ! failover_zone $zone_file; then + ec=1 + fi + done + + # If ec is non-zero, cron will send out an e-mail. + exit $ec +} + +main diff --git a/gemfeed/examples/conf/frontends/scripts/dserver-update-key-cache.sh.tpl b/gemfeed/examples/conf/frontends/scripts/dserver-update-key-cache.sh.tpl new file mode 100644 index 00000000..86b5ecf9 --- /dev/null +++ b/gemfeed/examples/conf/frontends/scripts/dserver-update-key-cache.sh.tpl @@ -0,0 +1,34 @@ +#!/bin/ksh + +CACHEDIR=/var/run/dserver/cache +DSERVER_USER=_dserver +DSERVER_GROUP=_dserver + +echo 'Updating SSH key cache' + +ls /home/ | while read remoteuser; do + keysfile=/home/$remoteuser/.ssh/authorized_keys + + if [ -f $keysfile ]; then + cachefile=$CACHEDIR/$remoteuser.authorized_keys + echo "Caching $keysfile -> $cachefile" + + cp $keysfile $cachefile + chown $DSERVER_USER:$DSERVER_GROUP $cachefile + chmod 600 $cachefile + fi +done + +# Clean up obsolete public SSH keys +find $CACHEDIR -name \*.authorized_keys -type f | +while read cachefile; do + remoteuser=$(basename $cachefile | cut -d. -f1) + keysfile=/home/$remoteuser/.ssh/authorized_keys + + if [ ! 
-f $keysfile ]; then + echo "Deleting obsolete cache file $cachefile" + rm $cachefile + fi +done + +echo 'All set...' diff --git a/gemfeed/examples/conf/frontends/scripts/fooodds.txt b/gemfeed/examples/conf/frontends/scripts/fooodds.txt new file mode 100644 index 00000000..0e08bdd1 --- /dev/null +++ b/gemfeed/examples/conf/frontends/scripts/fooodds.txt @@ -0,0 +1,191 @@ +% ++ +.. +/actuator +/actuator/health +/admin +/ajax +alfacgiapi +/ALFA_DATA +/api +/apply.cgi +/ARest1.exe +.asp +/aspera +/assets +/audiobookshelf +/auth +/autodiscover +/.aws +/bac +/back +/backup +/bak +/base +/.bash_history +/bf +/bin +/bin/sh +/bk +/bkp +/blog +/blurs +/boaform +/boafrm +/.bod +/Br7q +/british-airways +/buetow.org.zip +/buetow.zip +/burodecredito +/c +/.cache +/ccaguardians +/cdn-cgi +/centralbankthailand +/cfdump.packetsdatabase.com +/charlesbridge +/check.txt +/cimtechsolutions +/.circleci +/c/k2 +/ckfinder +/client.zip +/cloud-config.yml +/cloudflare.com +/clssettlement +/cmd,/simZysh/register_main/setCookie +/cn/cmd +/codeberg +/CODE_OF_CONDUCT.md +/columbiagas +/common_page +/comp +/concerto +/config +/config.json +/config.xml +/Config.xml +/config.yaml +/config.yml +/connectivitycheck.gstatic.com +/connector.sds +/console +/contact-information.html +/contact-us +/containers +/CONTRIBUTING.md +/credentials.txt +/crivo +/current_config +/cwservices +/daAV +/dana-cached +/dana-na +/database_backup.sql +/.database.bak +/database.sql +/data.zip +/db +/debug +/debug.cgi +/decoherence-is-just-realizing-this +/demo +/developmentserver +/directory.gz +/directory.tar +/directory.zip +/dir.html +/DnHb +/dns-query +docker-compose +/docker-compose.yml +/?document=images +/Dorybau2.html +/Dorybau.html +/dory.buetow.org +/download +/DpbF +/druid +/dtail.dev.gz +/dtail.dev.sql +/dtail.dev.tar.gz +/dtail.dev.zip +/dtail.html +/dtail.zip +/dump.sql +/dvQ1 +/dvr/cmd +/edualy-shammin +/ekggho +.env +/epa +/etc +/eW9h +/ews +/F3to +/f3Yk +/fahrzeugtechnik.fh-joanneum.at +/failedbythefos 
+/features +/federalhomeloanbankofdesmoines +/fhir +/fhir-server +/file-manager +/files +/files.zip +/firstfinancial +/flash +/flower +/foostats +/footlocker +/foo.zip +/foo.zone.bz2 +/foozone.webp +/foo.zone.zip +/form.html +/freeze.na4u.ru +/frontend.zip +/ftpsync.settings +/full_backup.zip +/FvwmRearrange.png +/gdb.pdf +/geoserver +.git +/git-guides +/global-protect +/gm-donate.net +/GMUs +/goform +/google.com +/GoRU +/GponForm +/helpdesk +/high-noise-level-for-that-earth-day-with-colors-gay +/his-viewpoint-is-not-economics-until-they-harden +/hN6p +HNAP1 +/hp +/_ignition +jndi:ldap +.js +.lua +microsoft.exchange +/owa/ +.php +/phpinfo +phpunit +/portal/redlion +/_profiler +.rar +/RDWeb +robots.txt +/SDK +/sitemap.xml +/sites +.sql +/ueditor +/vendor +@vite +wordpress +/wp diff --git a/gemfeed/examples/conf/frontends/scripts/foostats.pl b/gemfeed/examples/conf/frontends/scripts/foostats.pl new file mode 100644 index 00000000..a440d941 --- /dev/null +++ b/gemfeed/examples/conf/frontends/scripts/foostats.pl @@ -0,0 +1,1910 @@ +#!/usr/bin/perl + +use v5.38; + +# Those are enabled automatically now w/ this version of Perl +# use strict; +# use warnings; + +use builtin qw(true false); +use experimental qw(builtin); + +use feature qw(refaliasing); +no warnings qw(experimental::refaliasing); + +# Debugging aids like diagnostics are noisy in production. +# Removed per review: enable locally when debugging only. + +use constant VERSION => 'v0.1.0'; + +# Package: FileHelper — small file/JSON helpers +# - Purpose: Atomic writes, gzip JSON read/write, and line reading. +# - Notes: Dies on I/O errors; JSON encoding uses core JSON. +package FileHelper { + use JSON; + + # Sub: write + # - Purpose: Atomic write to a file via "$path.tmp" and rename. + # - Params: $path (str) destination; $content (str) contents to write. + # - Return: undef; dies on failure. 
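FileHelper's write path described above relies on the write-to-temp-then-rename idiom: the full content goes to "$path.tmp" first, and rename(2) swaps it into place atomically, so readers only ever see the old file or the complete new one. The same idiom in shell, as a sketch (the `atomic_write` helper is hypothetical, not part of the script):

```shell
# Sketch of the atomic-write idiom used by FileHelper::write above:
# write the content to "$path.tmp", then mv (rename) it into place.
# rename is atomic within a single filesystem, so no reader can
# observe a partially written file.
atomic_write() { # usage: atomic_write <path> <content>
    path=$1 content=$2
    printf '%s' "$content" > "$path.tmp" || return 1
    mv "$path.tmp" "$path"
}
```

The temp file must live on the same filesystem as the destination; otherwise `mv` degrades to a non-atomic copy-and-delete.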
+ sub write ($path, $content) { + open my $fh, '>', "$path.tmp" or die "$path.tmp: $!"; + print $fh $content; + close $fh; + rename "$path.tmp", $path or die "$path.tmp: $!"; + } + + # Sub: write_json_gz + # - Purpose: JSON-encode $data and write it gzipped atomically. + # - Params: $path (str) destination path; $data (ref/scalar) Perl data. + # - Return: undef; dies on failure. + sub write_json_gz ($path, $data) { + my $json = encode_json $data; + + say "Writing $path"; + open my $fd, '>:gzip', "$path.tmp" or die "$path.tmp: $!"; + print $fd $json; + close $fd; + + rename "$path.tmp", $path or die "$path.tmp: $!"; + } + + # Sub: read_json_gz + # - Purpose: Read a gzipped JSON file and decode to Perl data. + # - Params: $path (str) path to .json.gz file. + # - Return: Perl data structure. + sub read_json_gz ($path) { + say "Reading $path"; + open my $fd, '<:gzip', $path or die "$path: $!"; + my $json = decode_json <$fd>; + close $fd; + return $json; + } + + # Sub: read_lines + # - Purpose: Slurp file lines and chomp newlines. + # - Params: $path (str) file path. + # - Return: list of lines (no trailing newlines). + sub read_lines ($path) { + my @lines; + open(my $fh, '<', $path) or die "$path: $!"; + chomp(@lines = <$fh>); + close($fh); + return @lines; + } +} + +# Package: DateHelper — date range helpers +# - Purpose: Produce date strings used for report windows. +# - Format: Dates are returned as YYYYMMDD strings. +package DateHelper { + use Time::Piece; + + # Sub: last_month_dates + # - Purpose: Return dates for yesterday back to 31 days ago (inclusive). + # - Params: none. + # - Return: list of YYYYMMDD strings, newest first. + sub last_month_dates () { + my $today = localtime; + my @dates; + + for my $days_ago (1 .. 
31) { + my $date = $today - ($days_ago * 24 * 60 * 60); + push @dates, $date->strftime('%Y%m%d'); + } + + return @dates; + } + +} + +# Package: Foostats::Logreader — parse and normalize logs +# - Purpose: Read web and gemini logs, anonymize IPs, and emit normalized events. +# - Output Event: { proto, host, ip_hash, ip_proto, date, time, uri_path, status } +package Foostats::Logreader { + use Digest::SHA3 'sha3_512_base64'; + use File::stat; + use PerlIO::gzip; + use Time::Piece; + use String::Util qw(contains startswith endswith); + + # Make log locations configurable (env overrides) to enable testing. + # Sub: gemini_logs_glob + # - Purpose: Glob for gemini-related logs; env override for testing. + # - Return: glob pattern string. + sub gemini_logs_glob { $ENV{FOOSTATS_GEMINI_LOGS_GLOB} // '/var/log/daemon*' } + + # Sub: web_logs_glob + # - Purpose: Glob for web access logs; env override for testing. + # - Return: glob pattern string. + sub web_logs_glob { $ENV{FOOSTATS_WEB_LOGS_GLOB} // '/var/www/logs/access.log*' } + + # Sub: anonymize_ip + # - Purpose: Classify IPv4/IPv6 and map IP to a stable SHA3-512 base64 hash. + # - Params: $ip (str) source IP. + # - Return: ($hash, $proto) where $proto is 'IPv4' or 'IPv6'. + sub anonymize_ip ($ip) { + my $ip_proto = contains($ip, ':') ? 'IPv6' : 'IPv4'; + my $ip_hash = sha3_512_base64 $ip; + return ($ip_hash, $ip_proto); + } + + # Sub: read_lines + # - Purpose: Iterate files matching glob by age; invoke $cb for each line. + # - Params: $glob (str) file glob; $cb (code) callback ($year, @fields). + # - Return: undef; stops early if callback returns undef for a file. + sub read_lines ($glob, $cb) { + my sub year ($path) { + localtime((stat $path)->mtime)->strftime('%Y'); + } + + my sub open_file ($path) { + my $flag = $path =~ /\.gz$/ ? '<:gzip' : '<'; + open my $fd, $flag, $path or die "$path: $!"; + return $fd; + } + + my $last = false; + say 'File path glob matches: ' . 
join(' ', glob $glob); + + LAST: + for my $path (sort { -M $a <=> -M $b } glob $glob) { + say "Processing $path"; + + my $file = open_file $path; + my $year = year $file; + + while (<$file>) { + next if contains($_, 'logfile turned over'); + + # last == true means: After this file, don't process more + $last = true unless defined $cb->($year, split / +/); + } + + say "Closing $path (last:$last)"; + close $file; + last LAST if $last; + } + } + + # Sub: parse_web_logs + # - Purpose: Parse web log lines into normalized events and pass to callback. + # - Params: $last_processed_date (YYYYMMDD int) lower bound; $cb (code) event consumer. + # - Return: undef. + sub parse_web_logs ($last_processed_date, $cb) { + my sub parse_date ($date) { + my $t = Time::Piece->strptime($date, '[%d/%b/%Y:%H:%M:%S'); + return ($t->strftime('%Y%m%d'), $t->strftime('%H%M%S')); + } + + my sub parse_web_line (@line) { + my ($date, $time) = parse_date $line [4]; + return undef if $date < $last_processed_date; + + # X-Forwarded-For? + my $ip = $line[-2] eq '-' ? $line[1] : $line[-2]; + my ($ip_hash, $ip_proto) = anonymize_ip $ip; + + return { + proto => 'web', + host => $line[0], + ip_hash => $ip_hash, + ip_proto => $ip_proto, + date => $date, + time => $time, + uri_path => $line[7], + status => $line[9], + }; + } + + read_lines web_logs_glob(), sub ($year, @line) { + $cb->(parse_web_line @line); + }; + } + + # Sub: parse_gemini_logs + # - Purpose: Parse vger/relayd lines, merge paired entries, and emit events. + # - Params: $last_processed_date (YYYYMMDD int); $cb (code) event consumer. + # - Return: undef. 
+ sub parse_gemini_logs ($last_processed_date, $cb) { + my sub parse_date ($year, @line) { + my $timestr = "$line[0] $line[1]"; + return Time::Piece->strptime($timestr, '%b %d')->strftime("$year%m%d"); + } + + my sub parse_vger_line ($year, @line) { + my $full_path = $line[5]; + $full_path =~ s/"//g; + my ($proto, undef, $host, $uri_path) = split '/', $full_path, 4; + $uri_path = '' unless defined $uri_path; + + return { + proto => 'gemini', + host => $host, + uri_path => "/$uri_path", + status => $line[6], + date => int(parse_date($year, @line)), + time => $line[2], + }; + } + + my sub parse_relayd_line ($year, @line) { + my $date = int(parse_date($year, @line)); + + my ($ip_hash, $ip_proto) = anonymize_ip $line [12]; + return { + ip_hash => $ip_hash, + ip_proto => $ip_proto, + date => $date, + time => $line[2], + }; + } + + # Expect one vger and one relayd log line per event! So collect + # both events (one from one log line each) and then merge the result hash! + my ($vger, $relayd); + read_lines gemini_logs_glob(), sub ($year, @line) { + if ($line[4] eq 'vger:') { + $vger = parse_vger_line $year, @line; + } + elsif ($line[5] eq 'relay' and startswith($line[6], 'gemini')) { + $relayd = parse_relayd_line $year, @line; + return undef + if $relayd->{date} < $last_processed_date; + } + + if (defined $vger and defined $relayd and $vger->{time} eq $relayd->{time}) { + $cb->({ %$vger, %$relayd }); + $vger = $relayd = undef; + } + + true; + }; + } + + # Sub: parse_logs + # - Purpose: Coordinate parsing for both web and gemini, aggregating into stats. + # - Params: $last_web_date, $last_gemini_date (YYYYMMDD int), $odds_file, $odds_log. + # - Return: stats hashref keyed by "proto_YYYYMMDD". 
+ sub parse_logs ($last_web_date, $last_gemini_date, $odds_file, $odds_log) { + my $agg = Foostats::Aggregator->new($odds_file, $odds_log); + + say "Last web date: $last_web_date"; + say "Last gemini date: $last_gemini_date"; + + parse_web_logs $last_web_date, sub ($event) { + $agg->add($event); + }; + parse_gemini_logs $last_gemini_date, sub ($event) { + $agg->add($event); + }; + + return $agg->{stats}; + } +} + +# Package: Foostats::Filter — request filtering and logging +# - Purpose: Identify odd URI patterns and excessive requests per second per IP. +# - Notes: Maintains an in-process blocklist for the current run. +package Foostats::Filter { + use String::Util qw(contains startswith endswith); + + # Sub: new + # - Purpose: Construct a filter with odd patterns and a log path. + # - Params: $odds_file (str) pattern list; $log_path (str) append-only log file. + # - Return: blessed Foostats::Filter instance. + sub new ($class, $odds_file, $log_path) { + say "Logging filter to $log_path"; + my @odds = FileHelper::read_lines($odds_file); + bless { odds => \@odds, log_path => $log_path }, $class; + } + + # Sub: ok + # - Purpose: Check if an event passes filters; updates block state/logging. + # - Params: $event (hashref) normalized request. + # - Return: true if allowed; false if blocked. + sub ok ($self, $event) { + state %blocked = (); + return false if exists $blocked{ $event->{ip_hash} }; + + if ($self->odd($event) or $self->excessive($event)) { + ($blocked{ $event->{ip_hash} } //= 0)++; + return false; + } + else { + return true; + } + } + + # Sub: odd + # - Purpose: Match URI path against user-provided odd patterns (substring match). + # - Params: $event (hashref) with uri_path. + # - Return: true if odd (blocked), false otherwise. 
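The odd-pattern filter documented above is a plain substring match of the URI path against the entries of the odds file (blank lines and '#' comments are skipped). The same check can be approximated with a fixed-string grep; `is_odd_uri` below is a hypothetical shell stand-in for illustration, not the Perl implementation:

```shell
# Approximation of the Foostats odd-URI check: a URI path counts as
# "odd" when it contains any pattern from the odds file (one plain
# substring per line, as in fooodds.txt) as a fixed substring.
is_odd_uri() { # usage: is_odd_uri <uri-path> <odds-file>; exit 0 if odd
    uri=$1 odds=$2
    pats=$(mktemp)
    # Drop comment and blank lines, then match the URI against the rest.
    grep -v -e '^[[:space:]]*#' -e '^$' "$odds" > "$pats"
    printf '%s\n' "$uri" | grep -q -F -f "$pats"
    rc=$?
    rm -f "$pats"
    return $rc
}
```

For example, `/wp-login.php` is odd because it contains the `/wp` pattern, while a regular page such as `/gemfeed/index.gmi` passes.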
+ sub odd ($self, $event) { + \my $uri_path = \$event->{uri_path}; + + for ($self->{odds}->@*) { + next if !defined $_ || $_ eq '' || /^\s*#/; + next unless contains($uri_path, $_); + $self->log('WARN', $uri_path, "contains $_ and is odd and will therefore be blocked!"); + return true; + } + + $self->log('OK', $uri_path, "appears fine..."); + return false; + } + + # Sub: log + # - Purpose: Deduplicated append-only logging for filter decisions. + # - Params: $severity (OK|WARN), $subject (str), $message (str). + # - Return: undef. + sub log ($self, $severity, $subject, $message) { + state %dedup; + + # Don't log if path was already logged + return if exists $dedup{$subject}; + $dedup{$subject} = 1; + + open(my $fh, '>>', $self->{log_path}) or die $self->{log_path} . ": $!"; + print $fh "$severity: $subject $message\n"; + close($fh); + } + + # Sub: excessive + # - Purpose: Block if an IP makes more than one request within the same second. + # - Params: $event (hashref) with time and ip_hash. + # - Return: true if blocked; false otherwise. + sub excessive ($self, $event) { + \my $time = \$event->{time}; + \my $ip_hash = \$event->{ip_hash}; + + state $last_time = $time; # Time with second: 'HH:MM:SS' + state %count = (); # IPs accessing within the same second! + + if ($last_time ne $time) { + $last_time = $time; + %count = (); + return false; + } + + # IP requested site more than once within the same second!? + if (1 < ++($count{$ip_hash} //= 0)) { + $self->log('WARN', $ip_hash, "blocked due to excessive requesting..."); + return true; + } + + return false; + } +} + +# Package: Foostats::Aggregator — in-memory stats builder +# - Purpose: Apply filters and accumulate counts, unique IPs per feed/page. 
+package Foostats::Aggregator { + use String::Util qw(contains startswith endswith); + + use constant { + ATOM_FEED_URI => '/gemfeed/atom.xml', + GEMFEED_URI => '/gemfeed/index.gmi', + GEMFEED_URI_2 => '/gemfeed/', + }; + + # Sub: new + # - Purpose: Construct aggregator with a filter and empty stats store. + # - Params: $odds_file (str), $odds_log (str). + # - Return: Foostats::Aggregator instance. + sub new ($class, $odds_file, $odds_log) { + bless { filter => Foostats::Filter->new($odds_file, $odds_log), stats => {} }, $class; + } + + # Sub: add + # - Purpose: Apply filter, update counts and unique-IP sets, and return event. + # - Params: $event (hashref) normalized event; ignored if undef. + # - Return: $event; filtered events increment filtered count only. + sub add ($self, $event) { + return undef unless defined $event; + + my $date = $event->{date}; + my $date_key = $event->{proto} . "_$date"; + + # Stats data model per protocol+day (key: "proto_YYYYMMDD"): + # - count: per-proto request count, per IP version, and filtered count + # - feed_ips: unique IPs per feed type (atom_feed, gemfeed) + # - page_ips: unique IPs per host and per URL + $self->{stats}{$date_key} //= { + count => { filtered => 0, }, + feed_ips => { + atom_feed => {}, + gemfeed => {}, + }, + page_ips => { + hosts => {}, + urls => {}, + }, + }; + + \my $s = \$self->{stats}{$date_key}; + unless ($self->{filter}->ok($event)) { + $s->{count}{filtered}++; + return $event; + } + + $self->add_count($s, $event); + $self->add_page_ips($s, $event) unless $self->add_feed_ips($s, $event); + return $event; + } + + # Sub: add_count + # - Purpose: Increment totals by protocol and IP version. + # - Params: $stats (hashref) date bucket; $event (hashref). + # - Return: undef. 
+ sub add_count ($self, $stats, $event) { + \my $c = \$stats->{count}; + \my $e = \$event; + + ($c->{ $e->{proto} } //= 0)++; + ($c->{ $e->{ip_proto} } //= 0)++; + } + + # Sub: add_feed_ips + # - Purpose: If event hits feed endpoints, add unique IP and short-circuit. + # - Params: $stats (hashref), $event (hashref). + # - Return: 1 if feed matched; 0 otherwise. + sub add_feed_ips ($self, $stats, $event) { + \my $f = \$stats->{feed_ips}; + \my $e = \$event; + + # Atom feed (exact path match, allow optional query string) + if ($e->{uri_path} =~ m{^/gemfeed/atom\.xml(?:[?#].*)?$}) { + ($f->{atom_feed}->{ $e->{ip_hash} } //= 0)++; + return 1; + } + + # Gemfeed index: '/gemfeed/' or '/gemfeed/index.gmi' (optionally with query) + if ($e->{uri_path} =~ m{^/gemfeed/(?:index\.gmi)?(?:[?#].*)?$}) { + ($f->{gemfeed}->{ $e->{ip_hash} } //= 0)++; + return 1; + } + + return 0; + } + + # Sub: add_page_ips + # - Purpose: Track unique IPs per host and per URL for .html/.gmi pages. + # - Params: $stats (hashref), $event (hashref). + # - Return: undef. + sub add_page_ips ($self, $stats, $event) { + \my $e = \$event; + \my $p = \$stats->{page_ips}; + + return if !endswith($e->{uri_path}, '.html') && !endswith($e->{uri_path}, '.gmi'); + + ($p->{hosts}->{ $e->{host} }->{ $e->{ip_hash} } //= 0)++; + ($p->{urls}->{ $e->{host} . $e->{uri_path} }->{ $e->{ip_hash} } //= 0)++; + } +} + +# Package: Foostats::FileOutputter — write per-day stats to disk +# - Purpose: Persist aggregated stats to gzipped JSON files under a stats dir. +package Foostats::FileOutputter { + use JSON; + use Sys::Hostname; + use PerlIO::gzip; + + # Sub: new + # - Purpose: Create outputter with stats_dir; ensures directory exists. + # - Params: %args (hash) must include stats_dir. + # - Return: Foostats::FileOutputter instance. + sub new ($class, %args) { + my $self = bless \%args, $class; + mkdir $self->{stats_dir} or die $self->{stats_dir} . ": $!" 
unless -d $self->{stats_dir};
+        return $self;
+    }
+
+    # Sub: last_processed_date
+    # - Purpose: Determine the most recent processed date for a protocol for this host.
+    # - Params: $proto (str) 'web' or 'gemini'.
+    # - Return: YYYYMMDD int (0 if none found).
+    sub last_processed_date ($self, $proto) {
+        my $hostname  = hostname();
+        my @processed = glob $self->{stats_dir} . "/${proto}_????????.$hostname.json.gz";
+        my ($date) = @processed ? ($processed[-1] =~ /_(\d{8})\.\Q$hostname\E\.json\.gz/) : 0;
+        return int($date // 0);
+    }
+
+    # Sub: write
+    # - Purpose: Write one gzipped JSON file per date bucket to stats_dir.
+    # - Params: none (uses $self->{stats}).
+    # - Return: undef.
+    sub write ($self) {
+        $self->for_dates(
+            sub ($self, $date_key, $stats) {
+                my $hostname = hostname();
+                my $path     = $self->{stats_dir} . "/${date_key}.$hostname.json.gz";
+                FileHelper::write_json_gz $path, $stats;
+            }
+        );
+    }
+
+    # Sub: for_dates
+    # - Purpose: Iterate date-keyed stats in sorted order and call $cb.
+    # - Params: $cb (code) receives ($self, $date_key, $stats).
+    # - Return: undef.
+    sub for_dates ($self, $cb) {
+        $cb->($self, $_, $self->{stats}{$_}) for sort keys $self->{stats}->%*;
+    }
+}
+
+# Package: Foostats::Replicator — pull partner stats files over HTTP(S)
+# - Purpose: Fetch recent partner node stats into local stats dir.
+package Foostats::Replicator {
+    use JSON;
+    use File::Basename;
+    use LWP::UserAgent;
+    use String::Util qw(endswith);
+
+    # Sub: replicate
+    # - Purpose: For each proto and the last 31 days, replicate the newest files.
+    # - Params: $stats_dir (str) local dir; $partner_node (str) hostname.
+    # - Return: undef (best-effort fetches).
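+    # - Example (sketch; the directory and hostname below are made-up values):
+    #       Foostats::Replicator::replicate('/var/foostats', 'partner.example.org');
+    #   fetches e.g. https://partner.example.org/foostats/web_20240101.partner.example.org.json.gz
+    #   into /var/foostats, skipping files already present except the newest three.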
+ sub replicate ($stats_dir, $partner_node) { + say "Replicating from $partner_node"; + + for my $proto (qw(gemini web)) { + my $count = 0; + + for my $date (DateHelper::last_month_dates) { + my $file_base = "${proto}_${date}"; + my $dest_path = "${file_base}.$partner_node.json.gz"; + + replicate_file( + "https://$partner_node/foostats/$dest_path", + "$stats_dir/$dest_path", + $count++ < 3, # Always replicate the newest 3 files. + ); + } + } + } + + # Sub: replicate_file + # - Purpose: Download a single URL to a destination unless already present (unless forced). + # - Params: $remote_url (str) source; $dest_path (str) destination; $force (bool/int). + # - Return: undef; logs failures. + sub replicate_file ($remote_url, $dest_path, $force) { + + # $dest_path already exists, not replicating it + return if !$force && -f $dest_path; + + say "Replicating $remote_url to $dest_path (force:$force)... "; + my $response = LWP::UserAgent->new->get($remote_url); + unless ($response->is_success) { + say "\nFailed to fetch the file: " . $response->status_line; + return; + } + + FileHelper::write $dest_path, $response->decoded_content; + say 'done'; + } +} + +# Package: Foostats::Merger — merge per-host daily stats into a single view +# - Purpose: Merge multiple node files per day into totals and unique counts. +package Foostats::Merger { + + # Sub: merge + # - Purpose: Produce merged stats for the last month (date => stats hashref). + # - Params: $stats_dir (str) directory with daily gz JSON files. + # - Return: hash (not ref) of date => merged stats. + sub merge ($stats_dir) { + my %merge; + $merge{$_} = merge_for_date($stats_dir, $_) for DateHelper::last_month_dates; + return %merge; + } + + # Sub: merge_for_date + # - Purpose: Merge all node files for a specific date into one stats hashref. + # - Params: $stats_dir (str), $date (YYYYMMDD str/int). + # - Return: { feed_ips => {...}, count => {...}, page_ips => {...} }. 
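+    # - Example (sketch; path and date are made-up values):
+    #       my $day = Foostats::Merger::merge_for_date('/var/foostats', 20240101);
+    #       # $day->{count}{web} now holds the web request total summed over all node files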
+    sub merge_for_date ($stats_dir, $date) {
+        printf "Merging for date %s\n", $date;
+        my @stats = stats_for_date($stats_dir, $date);
+        return {
+            feed_ips => feed_ips(@stats),
+            count    => count(@stats),
+            page_ips => page_ips(@stats),
+        };
+    }
+
+    # Sub: merge_ips
+    # - Purpose: Deep-ish merge helper: sums numbers, merges hash-of-hash counts.
+    # - Params: $a (hashref target), $b (hashref source), $key_transform (code|undef).
+    # - Return: undef; updates $a in place; dies on incompatible types.
+    sub merge_ips ($a, $b, $key_transform = undef) {
+        my sub merge ($a, $b) {
+            while (my ($key, $val) = each %$b) {
+                $a->{$key} //= 0;
+                $a->{$key} += $val;
+            }
+        }
+
+        my $is_num = qr/^\d+(\.\d+)?$/;
+
+        while (my ($key, $val) = each %$b) {
+            $key = $key_transform->($key) if defined $key_transform;
+
+            if (not exists $a->{$key}) {
+                $a->{$key} = $val;
+            }
+            elsif (ref($a->{$key}) eq 'HASH' && ref($val) eq 'HASH') {
+                merge($a->{$key}, $val);
+            }
+            elsif ($a->{$key} =~ $is_num && $val =~ $is_num) {
+                $a->{$key} += $val;
+            }
+            else {
+                # die does not interpolate printf formats, so build the message first.
+                die sprintf "Not merging key '%s' (ref:%s): '%s' (ref:%s) with '%s' (ref:%s)\n",
+                    $key,
+                    ref($key),
+                    $a->{$key},
+                    ref($a->{$key}),
+                    $val,
+                    ref($val);
+            }
+        }
+    }
+
+    # Sub: feed_ips
+    # - Purpose: Merge feed unique-IP sets from per-proto stats into totals.
+    # - Params: @stats (list of stats hashrefs) each with {proto, feed_ips}.
+    # - Return: hashref with Total and per-proto feed counts.
+    sub feed_ips (@stats) {
+        my (%gemini, %web);
+
+        for my $stats (@stats) {
+            my $merge = $stats->{proto} eq 'web' ?
\%web : \%gemini;
+            printf "Merging proto %s feed IPs\n", $stats->{proto};
+            merge_ips($merge, $stats->{feed_ips});
+        }
+
+        my %total;
+        merge_ips(\%total, $web{$_})    for keys %web;
+        merge_ips(\%total, $gemini{$_}) for keys %gemini;
+
+        # Guard with empty hashrefs: a proto may have no stats files for the day.
+        my %merge = (
+            'Total'          => scalar keys %total,
+            'Gemini Gemfeed' => scalar keys %{ $gemini{gemfeed} // {} },
+            'Gemini Atom'    => scalar keys %{ $gemini{atom_feed} // {} },
+            'Web Gemfeed'    => scalar keys %{ $web{gemfeed} // {} },
+            'Web Atom'       => scalar keys %{ $web{atom_feed} // {} },
+        );
+
+        return \%merge;
+    }
+
+    # Sub: count
+    # - Purpose: Sum request counters across stats for the day.
+    # - Params: @stats (list of stats hashrefs) each with {count}.
+    # - Return: hashref of summed counters.
+    sub count (@stats) {
+        my %merge;
+
+        for my $stats (@stats) {
+            while (my ($key, $val) = each $stats->{count}->%*) {
+                $merge{$key} //= 0;
+                $merge{$key} += $val;
+            }
+        }
+
+        return \%merge;
+    }
+
+    # Sub: page_ips
+    # - Purpose: Merge unique IPs per host and per URL; coalesce .gmi/.html variants of the same page.
+    # - Params: @stats (list of stats hashrefs) with {page_ips}{urls,hosts}.
+    # - Return: hashref with urls/hosts each mapping => unique counts.
+    sub page_ips (@stats) {
+        my %merge = (
+            urls  => {},
+            hosts => {}
+        );
+
+        for my $key (keys %merge) {
+            merge_ips(
+                $merge{$key},
+                $_->{page_ips}->{$key} // {},
+                sub ($key) {
+                    $key =~ s/\.gmi$/\.html/;
+                    $key;
+                }
+            ) for @stats;
+
+            # Keep only the unique IP count per host/URL
+            $merge{$key}->{$_} = scalar keys $merge{$key}->{$_}->%* for keys $merge{$key}->%*;
+        }
+
+        return \%merge;
+    }
+
+    # Sub: stats_for_date
+    # - Purpose: Load all stats files for a date across protos; tag proto/path.
+    # - Params: $stats_dir (str), $date (YYYYMMDD).
+    # - Return: list of stats hashrefs.
+ sub stats_for_date ($stats_dir, $date) { + my @stats; + + for my $proto (qw(gemini web)) { + for my $path (<$stats_dir/${proto}_${date}.*.json.gz>) { + printf "Reading %s\n", $path; + push @stats, FileHelper::read_json_gz($path); + @{ $stats[-1] }{qw(proto path)} = ($proto, $path); + } + } + + return @stats; + } +} + +# Package: Foostats::Reporter — build gemtext/HTML daily and summary reports +# - Purpose: Render daily reports and rolling summaries (30/365), and index pages. +package Foostats::Reporter { + use Time::Piece; + use HTML::Entities qw(encode_entities); + + our @TRUNCATED_URL_MAPPINGS; + + sub reset_truncated_url_mappings { @TRUNCATED_URL_MAPPINGS = (); } + + sub _record_truncated_url_mapping { + my ($truncated, $original) = @_; + push @TRUNCATED_URL_MAPPINGS, { truncated => $truncated, original => $original }; + } + + sub _lookup_full_url_for { + my ($candidate) = @_; + for my $idx (0 .. $#TRUNCATED_URL_MAPPINGS) { + my $entry = $TRUNCATED_URL_MAPPINGS[$idx]; + next unless $entry->{truncated} eq $candidate; + my $original = $entry->{original}; + splice @TRUNCATED_URL_MAPPINGS, $idx, 1; + return $original; + } + return undef; + } + + # Sub: truncate_url + # - Purpose: Middle-ellipsize long URLs to fit within a target length. + # - Params: $url (str), $max_length (int default 100). + # - Return: possibly truncated string. + sub truncate_url { + my ($url, $max_length) = @_; + $max_length //= 100; # Default to 100 characters + return $url if length($url) <= $max_length; + + # Calculate how many characters we need to remove + my $ellipsis = '...'; + my $ellipsis_length = length($ellipsis); + my $available_length = $max_length - $ellipsis_length; + + # Split available length between start and end, favoring the end + my $keep_start = int($available_length * 0.4); # 40% for start + my $keep_end = $available_length - $keep_start; # 60% for end + + my $start = substr($url, 0, $keep_start); + my $end = substr($url, -$keep_end); + + return $start . $ellipsis . 
$end; + } + + # Sub: truncate_urls_for_table + # - Purpose: Truncate URL cells in-place to fit target table width. + # - Params: $url_rows (arrayref of [url,count]), $count_column_header (str). + # - Return: undef; mutates $url_rows. + sub truncate_urls_for_table { + my ($url_rows, $count_column_header) = @_; + + # Calculate the maximum width needed for the count column + my $max_count_width = length($count_column_header); + for my $row (@$url_rows) { + my $count_width = length($row->[1]); + $max_count_width = $count_width if $count_width > $max_count_width; + } + + # Row format: "| URL... | count |" with padding + # Calculate: "| " (2) + URL + " | " (3) + count_with_padding + " |" (2) + my $max_url_length = 100 - 7 - $max_count_width; + $max_url_length = 70 if $max_url_length > 70; # Cap at reasonable length + + # Truncate URLs in place + for my $row (@$url_rows) { + my $original = $row->[0]; + my $truncated = truncate_url($original, $max_url_length); + if ($truncated ne $original) { + _record_truncated_url_mapping($truncated, $original); + } + $row->[0] = $truncated; + } + } + + # Sub: format_table + # - Purpose: Render a simple monospace table from headers and rows. + # - Params: $headers (arrayref), $rows (arrayref of arrayrefs). + # - Return: string with lines separated by \n. + sub format_table { + my ($headers, $rows) = @_; + + my @widths; + for my $col (0 .. $#{$headers}) { + my $max_width = length($headers->[$col]); + for my $row (@$rows) { + my $len = length($row->[$col]); + $max_width = $len if $len > $max_width; + } + push @widths, $max_width; + } + + my $header_line = '|'; + my $separator_line = '|'; + for my $col (0 .. $#{$headers}) { + $header_line .= sprintf(" %-*s |", $widths[$col], $headers->[$col]); + $separator_line .= '-' x ($widths[$col] + 2) . 
'|'; + } + + my @table_lines; + push @table_lines, $separator_line; # Add top terminator + push @table_lines, $header_line; + push @table_lines, $separator_line; + + for my $row (@$rows) { + my $row_line = '|'; + for my $col (0 .. $#{$row}) { + $row_line .= sprintf(" %-*s |", $widths[$col], $row->[$col]); + } + push @table_lines, $row_line; + } + + push @table_lines, $separator_line; # Add bottom terminator + + return join("\n", @table_lines); + } + + # Convert gemtext to HTML + # Sub: gemtext_to_html + # - Purpose: Convert a subset of Gemtext to compact HTML, incl. code blocks and lists. + # - Params: $content (str) Gemtext. + # - Return: HTML string (fragment). + sub gemtext_to_html { + my ($content) = @_; + my $html = ""; + my @lines = split /\n/, $content; + my $i = 0; + + while ($i < @lines) { + my $line = $lines[$i]; + + if ($line =~ /^```/) { + my @block_lines; + $i++; # Move past the opening ``` + while ($i < @lines && $lines[$i] !~ /^```/) { + push @block_lines, $lines[$i]; + $i++; + } + $html .= _gemtext_to_html_code_block(\@block_lines); + } + elsif ($line =~ /^### /) { + $html .= _gemtext_to_html_heading($line); + } + elsif ($line =~ /^## /) { + $html .= _gemtext_to_html_heading($line); + } + elsif ($line =~ /^# /) { + $html .= _gemtext_to_html_heading($line); + } + elsif ($line =~ /^=> /) { + $html .= _gemtext_to_html_link($line); + } + elsif ($line =~ /^\* /) { + my @list_items; + while ($i < @lines && $lines[$i] =~ /^\* /) { + push @list_items, $lines[$i]; + $i++; + } + $html .= _gemtext_to_html_list(\@list_items); + $i--; # Decrement to re-evaluate the current line in the outer loop + } + elsif ($line !~ /^\s*$/) { + $html .= _gemtext_to_html_paragraph($line); + } + + # Else, it's a blank line, which we skip for compact output. 
+ $i++; + } + + return $html; + } + + sub _gemtext_to_html_code_block { + my ($lines) = @_; + if (is_ascii_table($lines)) { + return convert_ascii_table_to_html($lines); + } + else { + my $html = "<pre>\n"; + for my $code_line (@$lines) { + $html .= encode_entities($code_line) . "\n"; + } + $html .= "</pre>\n"; + return $html; + } + } + + sub _gemtext_to_html_heading { + my ($line) = @_; + if ($line =~ /^### (.*)/) { + return "<h3>" . encode_entities($1) . "</h3>\n"; + } + elsif ($line =~ /^## (.*)/) { + return "<h2>" . encode_entities($1) . "</h2>\n"; + } + elsif ($line =~ /^# (.*)/) { + return "<h1>" . encode_entities($1) . "</h1>\n"; + } + return ''; + } + + sub _gemtext_to_html_link { + my ($line) = @_; + if ($line =~ /^=> (\S+)\s+(.*)/) { + my ($url, $text) = ($1, $2); + + # Drop 365-day summary links from HTML output + return '' if $url =~ /(?:^|[\/.])365day_summary_\d{8}\.gmi$/; + + # Convert .gmi links to .html + $url =~ s/\.gmi$/\.html/; + return + "<p><a href=\"" + . encode_entities($url) . "\">" + . encode_entities($text) + . "</a></p>\n"; + } + return ''; + } + + sub _gemtext_to_html_list { + my ($lines) = @_; + my $html = "<ul>\n"; + for my $line (@$lines) { + if ($line =~ /^\* (.*)/) { + $html .= "<li>" . linkify_text($1) . "</li>\n"; + } + } + $html .= "</ul>\n"; + return $html; + } + + sub _gemtext_to_html_paragraph { + my ($line) = @_; + return "<p>" . linkify_text($line) . "</p>\n"; + } + + # Check if the lines form an ASCII table + # Sub: is_ascii_table + # - Purpose: Heuristically detect if a code block is an ASCII table. + # - Params: $lines (arrayref of strings). + # - Return: 1 if likely table; 0 otherwise. 
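+    # - Example: these lines are detected as a table (the dashed separator matches):
+    #       | Host | Count |
+    #       |------|-------|
+    #       | a.b  | 3     |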
+ sub is_ascii_table { + my ($lines) = @_; + return 0 if @$lines < 3; # Need at least header, separator, and one data row + + # Check for separator lines with dashes and pipes + for my $line (@$lines) { + return 1 if $line =~ /^\|?[\s\-]+\|/; + } + return 0; + } + + # Convert ASCII table to HTML table + # Sub: convert_ascii_table_to_html + # - Purpose: Convert simple ASCII table lines to an HTML <table>. + # - Params: $lines (arrayref of strings). + # - Return: HTML string. + sub convert_ascii_table_to_html { + my ($lines) = @_; + my $html = "<table>\n"; + my $row_count = 0; + my $total_col_idx = -1; + + for my $line (@$lines) { + + # Skip separator lines + next if $line =~ /^\|?[\s\-]+\|/ && $line =~ /\-/; + + # Parse table row + my @cells = split /\s*\|\s*/, $line; + @cells = grep { length($_) > 0 } @cells; # Remove empty cells + + if (@cells) { + my $is_total_row = (trim($cells[0]) eq 'Total'); + $html .= "<tr>\n"; + + if ($row_count == 0) { # Header row + for my $i (0 .. $#cells) { + if (trim($cells[$i]) eq 'Total') { + $total_col_idx = $i; + last; + } + } + } + + my $tag = ($row_count == 0) ? "th" : "td"; + for my $i (0 .. $#cells) { + my $val = trim($cells[$i]); + my $cell_content = linkify_text($val); + + if ($is_total_row || ($i == $total_col_idx && $row_count > 0)) { + $html .= " <$tag><b>" . $cell_content . "</b></$tag>\n"; + } + else { + $html .= " <$tag>" . $cell_content . "</$tag>\n"; + } + } + $html .= "</tr>\n"; + $row_count++; + } + } + + $html .= "</table>\n"; + return $html; + } + + # Trim whitespace from string + # Sub: trim + # - Purpose: Strip leading/trailing whitespace. + # - Params: $str (str). + # - Return: trimmed string. + sub trim { + my ($str) = @_; + $str =~ s/^\s+//; + $str =~ s/\s+$//; + return $str; + } + + # Build an href for a token that looks like a URL or FQDN + # Sub: _guess_href + # - Purpose: Infer absolute href for a token (supports gemini for .gmi). + # - Params: $token (str) token from text. 
+    # - Return: href string or undef.
+    sub _guess_href {
+        my ($token) = @_;
+        my $t = $token;
+        $t =~ s/^\s+//;
+        $t =~ s/\s+$//;
+
+        # Already absolute http(s)
+        return $t if $t =~ m{^https?://}i;
+
+        # Extract trailing punctuation to avoid including it in href
+        my $trail = '';
+        if ($t =~ s{([)\]\}.,;:!?]+)$}{}) { $trail = $1; }
+
+        # host[/path]
+        if ($t =~ m{^([A-Za-z0-9.-]+\.[A-Za-z]{2,})(/[^\s<]*)?$}) {
+            my ($host, $path) = ($1, $2 // '');
+            my $is_gemini = $path =~ /\.gmi(?:[?#].*)?$/i;
+            my $scheme    = $is_gemini ? 'gemini' : 'https';
+
+            # If the token carried no path (e.g. a truncated URL), fall back to the host root
+            my $href = sprintf('%s://%s%s', $scheme, $host, ($path eq '' ? '/' : $path));
+            return ($href . $trail);
+        }
+
+        return undef;
+    }
+
+    # Turn any URLs/FQDNs in the provided text into anchors
+    # Sub: linkify_text
+    # - Purpose: Replace URL/FQDN tokens in text with HTML anchors.
+    # - Params: $text (str) input text.
+    # - Return: HTML string with entities encoded.
+    sub linkify_text {
+        my ($text) = @_;
+        return '' unless defined $text;
+
+        my $out = '';
+        my $pos = 0;
+        while ($text =~ m{((?:https?://)?[A-Za-z0-9.-]+\.[A-Za-z]{2,}(?:/[^\s<]*)?)}g) {
+            my $match = $1;
+            my $start = $-[1];
+            my $end   = $+[1];
+
+            # Emit preceding text
+            $out .= encode_entities(substr($text, $pos, $start - $pos));
+
+            # Separate trailing punctuation from the match
+            my ($core, $trail) = ($match, '');
+            if ($core =~ s{([)\]\}.,;:!?]+)$}{}) { $trail = $1; }
+
+            my $display = $core;
+            if (my $full = _lookup_full_url_for($core)) {
+                $display = $full;
+            }
+
+            my $href = _guess_href($display);
+            if (!$href) {
+                $href = _guess_href($core);
+            }
+
+            if ($href) {
+                # Only rewrite .gmi to .html for web links; gemini:// URLs keep .gmi
+                $href =~ s/\.gmi$/\.html/i if $href =~ m{^https?://}i;
+                $out .= sprintf(
+                    '<a href="%s">%s</a>%s',
+                    encode_entities($href), encode_entities($display),
+                    encode_entities($trail)
+                );
+            }
+            else {
+                # Not a linkable token after all
+                $out .= encode_entities($match);
+            }
+            $pos = $end;
+        }
+
+        # Remainder
+        $out .= encode_entities(substr($text, $pos));
+        return $out;
+    }
+
+    # Use
HTML::Entities::encode_entities imported above + + # Generate HTML wrapper + # Sub: generate_html_page + # - Purpose: Wrap content in a minimal HTML5 page with a title and CSS reset. + # - Params: $title (str), $content (str) HTML fragment. + # - Return: full HTML page string. + sub generate_html_page { + my ($title, $content) = @_; + return qq{<!DOCTYPE html> +<html lang="en"> +<head> + <meta charset="UTF-8"> + <meta name="viewport" content="width=device-width, initial-scale=1.0"> + <title>$title</title> + <style> + /* Compact, full-width layout */ + :root { + --pad: 8px; + } + html, body { + height: 100%; + } + body { + font-family: ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace; + line-height: 1.3; + margin: 0; + padding: var(--pad); + background: #fff; + color: #000; + } + /* Headings: smaller and tighter */ + h1, h2, h3 { margin: 0.5em 0 0.25em; font-weight: 600; } + h1 { font-size: 1em; } + h2 { font-size: 0.95em; } + h3 { font-size: 0.9em; } + /* Paragraphs and lists: minimal vertical rhythm */ + p { margin: 0.2em 0; } + ul { margin: 0.3em 0; padding-left: 1.2em; } + li { margin: 0.1em 0; } + /* Code blocks and tables */ + pre { + overflow-x: auto; + white-space: pre; + margin: 0.3em 0; + } + table { + border-collapse: collapse; + table-layout: auto; /* size columns by content */ + width: auto; /* do not stretch to full width */ + max-width: 100%; + margin: 0.5em 0; + font-size: 0.95em; + display: inline-table; /* keep as compact as content allows */ + } + th, td { + padding: 0.1em 0.3em; + text-align: left; + white-space: nowrap; /* avoid wide columns caused by wrapping */ + } + /* Links */ + a { color: #06c; text-decoration: underline; } + a:visited { color: #639; } + /* Rules */ + hr { border: none; border-top: 1px solid #ccc; margin: 0.5em 0; } + </style> +</head> +<body> +$content +</body> +</html> +}; + } + + # Sub: should_generate_daily_report + # - Purpose: Check if a daily report should be 
generated based on file existence and age. + # - Params: $date (YYYYMMDD), $report_path (str), $html_report_path (str). + # - Return: 1 if report should be generated, 0 otherwise. + sub should_generate_daily_report { + my ($date, $report_path, $html_report_path) = @_; + + my ($year, $month, $day) = $date =~ /(\d{4})(\d{2})(\d{2})/; + + # Calculate age of the data based on date in filename + my $today = Time::Piece->new(); + my $file_date = Time::Piece->strptime($date, '%Y%m%d'); + my $age_days = ($today - $file_date) / (24 * 60 * 60); + + if (-e $report_path && -e $html_report_path) { + + # Files exist + if ($age_days <= 3) { + + # Data is recent (within 3 days), regenerate it + say "Regenerating daily report for $year-$month-$day (data age: " + . sprintf("%.1f", $age_days) + . " days)"; + return 1; + } + else { + # Data is old (older than 3 days), skip if files exist + say "Skipping daily report for $year-$month-$day (files exist, data age: " + . sprintf("%.1f", $age_days) + . " days)"; + return 0; + } + } + else { + # File doesn't exist, generate it + say "Generating new daily report for $year-$month-$day (file doesn't exist, data age: " + . sprintf("%.1f", $age_days) + . 
" days)"; + return 1; + } + } + + sub generate_feed_stats_section { + my ($stats) = @_; + my $report_content = "### Feed Statistics\n\n"; + my @feed_rows; + push @feed_rows, [ 'Total', $stats->{feed_ips}{'Total'} // 0 ]; + push @feed_rows, [ 'Gemini Gemfeed', $stats->{feed_ips}{'Gemini Gemfeed'} // 0 ]; + push @feed_rows, [ 'Gemini Atom', $stats->{feed_ips}{'Gemini Atom'} // 0 ]; + push @feed_rows, [ 'Web Gemfeed', $stats->{feed_ips}{'Web Gemfeed'} // 0 ]; + push @feed_rows, [ 'Web Atom', $stats->{feed_ips}{'Web Atom'} // 0 ]; + $report_content .= "```\n"; + $report_content .= format_table([ 'Feed Type', 'Count' ], \@feed_rows); + $report_content .= "\n```\n\n"; + return $report_content; + } + + sub generate_top_n_table { + my (%args) = @_; + my $title = $args{title}; + my $data = $args{data}; + my $headers = $args{headers}; + my $limit = $args{limit} // 50; + my $is_url = $args{is_url} // 0; + + my $report_content = "### $title\n\n"; + my @rows; + my @sorted_keys = + sort { ($data->{$b} // 0) <=> ($data->{$a} // 0) } + keys %$data; + my $truncated = @sorted_keys > $limit; + @sorted_keys = @sorted_keys[ 0 .. $limit - 1 ] if $truncated; + + for my $key (@sorted_keys) { + push @rows, [ $key, $data->{$key} // 0 ]; + } + + if ($is_url) { + truncate_urls_for_table(\@rows, $headers->[1]); + } + + $report_content .= "```\n"; + $report_content .= format_table($headers, \@rows); + $report_content .= "\n```\n"; + if ($truncated) { + $report_content .= "\n... 
and more (truncated to $limit entries).\n"; + } + $report_content .= "\n"; + return $report_content; + } + + sub generate_top_urls_section { + my ($stats) = @_; + return generate_top_n_table( + title => 'Top 50 URLs', + data => $stats->{page_ips}{urls}, + headers => [ 'URL', 'Unique Visitors' ], + is_url => 1, + ); + } + + sub generate_top_hosts_section { + my ($stats) = @_; + return generate_top_n_table( + title => 'Page Statistics (by Host)', + data => $stats->{page_ips}{hosts}, + headers => [ 'Host', 'Unique Visitors' ], + ); + } + + sub generate_summary_section { + my ($stats) = @_; + my $report_content = "### Summary\n\n"; + my $total_requests = + ($stats->{count}{gemini} // 0) + ($stats->{count}{web} // 0); + $report_content .= "* Total requests: $total_requests\n"; + $report_content .= + "* Filtered requests: " . ($stats->{count}{filtered} // 0) . "\n"; + $report_content .= + "* Gemini requests: " . ($stats->{count}{gemini} // 0) . "\n"; + $report_content .= + "* Web requests: " . ($stats->{count}{web} // 0) . "\n"; + $report_content .= + "* IPv4 requests: " . ($stats->{count}{IPv4} // 0) . "\n"; + $report_content .= + "* IPv6 requests: " . ($stats->{count}{IPv6} // 0) . "\n\n"; + return $report_content; + } + + # Sub: report + # - Purpose: Generate daily .gmi and .html reports per date, then summaries and index. + # - Params: $stats_dir, $output_dir, $html_output_dir, %merged (date => stats). + # - Return: undef. 
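+    # - Example (sketch; directories are made-up values):
+    #       my %merged = Foostats::Merger::merge('/var/foostats');
+    #       Foostats::Reporter::report('/var/foostats', '/var/gemini/stats',
+    #           '/var/www/stats', %merged);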
+ sub report { + my ($stats_dir, $output_dir, $html_output_dir, %merged) = @_; + for my $date (sort { $b cmp $a } keys %merged) { + my $stats = $merged{$date}; + next unless $stats->{count}; + + my ($year, $month, $day) = $date =~ /(\d{4})(\d{2})(\d{2})/; + + my $report_path = "$output_dir/$date.gmi"; + my $html_report_path = "$html_output_dir/$date.html"; + + next unless should_generate_daily_report($date, $report_path, $html_report_path); + + reset_truncated_url_mappings(); + my $report_content = "## Stats for $year-$month-$day\n\n"; + $report_content .= generate_feed_stats_section($stats); + $report_content .= generate_top_urls_section($stats); + $report_content .= generate_top_hosts_section($stats); + $report_content .= generate_summary_section($stats); + + # Add links to summary reports (only monthly) + $report_content .= "## Related Reports\n\n"; + my $now = localtime; + my $current_date = $now->strftime('%Y%m%d'); + $report_content .= "=> ./30day_summary_$current_date.gmi 30-Day Summary Report\n\n"; + + # Ensure output directory exists + mkdir $output_dir unless -d $output_dir; + + # $report_path already defined above + say "Writing report to $report_path"; + FileHelper::write($report_path, $report_content); + + # Also write HTML version + mkdir $html_output_dir unless -d $html_output_dir; + my $html_path = "$html_output_dir/$date.html"; + my $html_content = gemtext_to_html($report_content); + my $html_page = generate_html_page("Stats for $year-$month-$day", $html_content); + say "Writing HTML report to $html_path"; + FileHelper::write($html_path, $html_page); + reset_truncated_url_mappings(); + } + + # Generate summary reports + generate_summary_report(30, $stats_dir, $output_dir, $html_output_dir, %merged); + + # Generate index.gmi and index.html + generate_index($output_dir, $html_output_dir); + } + + # Sub: generate_summary_report + # - Purpose: Generate N-day rolling summary in .gmi (+.html except 365-day). 
+ # - Params: $days (int), $stats_dir, $output_dir, $html_output_dir, %merged. + # - Return: undef. + sub generate_summary_report { + my ($days, $stats_dir, $output_dir, $html_output_dir, %merged) = @_; + + # Get the last N days of dates + my @dates = sort { $b cmp $a } keys %merged; + my $max_index = $days - 1; + @dates = @dates[ 0 .. $max_index ] if @dates > $days; + + my $today = localtime; + my $report_date = $today->strftime('%Y%m%d'); + + # Build report content + reset_truncated_url_mappings(); + my $report_content = build_report_header($today, $days); + + # Order: feed counts -> Top URLs -> daily top 3 for last 30 days -> other tables + $report_content .= build_feed_statistics_section(\@dates, \%merged); + $report_content .= build_feed_statistics_daily_average_section(\@dates, \%merged); + + # Aggregate and add top lists + my ($all_hosts, $all_urls) = aggregate_hosts_and_urls(\@dates, \%merged); + $report_content .= build_top_urls_section($all_urls, $days); + $report_content .= build_top3_urls_last_n_days_per_day($stats_dir, 30, \%merged); + $report_content .= build_top_hosts_section($all_hosts, $days); + $report_content .= build_daily_summary_section(\@dates, \%merged); + + # Add links to other summary reports + $report_content .= build_summary_links($days, $report_date); + + # Ensure output directory exists and write the summary report + mkdir $output_dir unless -d $output_dir; + + my $report_path = "$output_dir/${days}day_summary_$report_date.gmi"; + say "Writing $days-day summary report to $report_path"; + FileHelper::write($report_path, $report_content); + + # Also write HTML version, except for 365-day summaries (HTML suppressed) + if ($days != 365) { + mkdir $html_output_dir unless -d $html_output_dir; + my $html_path = "$html_output_dir/${days}day_summary_$report_date.html"; + my $html_content = gemtext_to_html($report_content); + my $html_page = generate_html_page("$days-Day Summary Report", $html_content); + say "Writing HTML $days-day summary 
report to $html_path"; + FileHelper::write($html_path, $html_page); + } + else { + say "Skipping HTML generation for 365-day summary (Gemtext only)"; + } + + reset_truncated_url_mappings(); + } + + sub build_feed_statistics_daily_average_section { + my ($dates, $merged) = @_; + + my %totals; + my $days_with_stats = 0; + + for my $date (@$dates) { + my $stats = $merged->{$date}; + next unless $stats->{feed_ips}; + $days_with_stats++; + + for my $key (keys %{ $stats->{feed_ips} }) { + $totals{$key} += $stats->{feed_ips}{$key}; + } + } + + return "" unless $days_with_stats > 0; + + my @avg_rows; + my $total_avg = 0; + my $has_total = 0; + + # Separate 'Total' from other keys + my @other_keys; + for my $key (keys %totals) { + if ($key eq 'Total') { + $total_avg = sprintf("%.2f", $totals{$key} / $days_with_stats); + $has_total = 1; + } + else { + push @other_keys, $key; + } + } + + # Sort other keys and create rows + for my $key (sort @other_keys) { + my $avg = sprintf("%.2f", $totals{$key} / $days_with_stats); + push @avg_rows, [ $key, $avg ]; + } + + # Add Total row at the end + push @avg_rows, [ 'Total', $total_avg ] if $has_total; + + my $content = "### Feed Statistics Daily Average (Last 30 Days)\n\n```\n"; + $content .= format_table([ 'Feed Type', 'Daily Average' ], \@avg_rows); + $content .= "\n```\n\n"; + + return $content; + } + + # Sub: build_report_header + # - Purpose: Header section for summary reports. + # - Params: $today (Time::Piece), $days (int default 30). + # - Return: gemtext string. + sub build_report_header { + my ($today, $days) = @_; + $days //= 30; # Default to 30 days for backward compatibility + + my $content = "# $days-Day Summary Report\n\n"; + $content .= "Generated on " . $today->strftime('%Y-%m-%d') . "\n\n"; + return $content; + } + + # Sub: build_daily_summary_section + # - Purpose: Table of daily total counts over a period. + # - Params: $dates (arrayref YYYYMMDD), $merged (hashref date=>stats). + # - Return: gemtext string. 
+ sub build_daily_summary_section { + my ($dates, $merged) = @_; + + my $content = "## Daily Summary Evolution (Last 30 Days)\n\n"; + $content .= "### Total Requests by Day\n\n```\n"; + + my @summary_rows; + for my $date (reverse @$dates) { + my $stats = $merged->{$date}; + next unless $stats->{count}; + + push @summary_rows, build_daily_summary_row($date, $stats); + } + + $content .= format_table([ 'Date', 'Filtered', 'Gemini', 'Web', 'IPv4', 'IPv6', 'Total' ], + \@summary_rows); + $content .= "\n```\n\n"; + + return $content; + } + + # Sub: build_daily_summary_row + # - Purpose: Build one table row with counts for a date. + # - Params: $date (YYYYMMDD), $stats (hashref). + # - Return: arrayref of cell strings. + sub build_daily_summary_row { + my ($date, $stats) = @_; + + my ($year, $month, $day) = $date =~ /(\d{4})(\d{2})(\d{2})/; + my $formatted_date = "$year-$month-$day"; + + my $total_requests = ($stats->{count}{gemini} // 0) + ($stats->{count}{web} // 0); + my $filtered = $stats->{count}{filtered} // 0; + my $gemini = $stats->{count}{gemini} // 0; + my $web = $stats->{count}{web} // 0; + my $ipv4 = $stats->{count}{IPv4} // 0; + my $ipv6 = $stats->{count}{IPv6} // 0; + + return [ $formatted_date, $filtered, $gemini, $web, $ipv4, $ipv6, $total_requests ]; + } + + # Sub: build_feed_statistics_section + # - Purpose: Table of feed unique counts by day over a period. + # - Params: $dates (arrayref), $merged (hashref). + # - Return: gemtext string. 
+ sub build_feed_statistics_section { + my ($dates, $merged) = @_; + + my $content = "### Feed Statistics Evolution\n\n```\n"; + + my @feed_rows; + for my $date (reverse @$dates) { + my $stats = $merged->{$date}; + next unless $stats->{feed_ips}; + + push @feed_rows, build_feed_statistics_row($date, $stats); + } + + $content .= + format_table([ 'Date', 'Gem Feed', 'Gem Atom', 'Web Feed', 'Web Atom', 'Total' ], + \@feed_rows); + $content .= "\n```\n\n"; + + return $content; + } + + # Sub: build_feed_statistics_row + # - Purpose: Build one row of feed unique counts for a date. + # - Params: $date (YYYYMMDD), $stats (hashref). + # - Return: arrayref of cell strings. + sub build_feed_statistics_row { + my ($date, $stats) = @_; + + my ($year, $month, $day) = $date =~ /(\d{4})(\d{2})(\d{2})/; + my $formatted_date = "$year-$month-$day"; + + return [ + $formatted_date, + $stats->{feed_ips}{'Gemini Gemfeed'} // 0, + $stats->{feed_ips}{'Gemini Atom'} // 0, + $stats->{feed_ips}{'Web Gemfeed'} // 0, + $stats->{feed_ips}{'Web Atom'} // 0, + $stats->{feed_ips}{'Total'} // 0 + ]; + } + + # Sub: aggregate_hosts_and_urls + # - Purpose: Sum hosts and URLs across multiple days. + # - Params: $dates (arrayref), $merged (hashref). + # - Return: (\%all_hosts, \%all_urls). 
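+    # - Example (sketch):
+    #       my ($hosts, $urls) = aggregate_hosts_and_urls(\@dates, \%merged);
+    #       # $hosts->{$host} and $urls->{$url} hold visitor sums over the period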
+ sub aggregate_hosts_and_urls { + my ($dates, $merged) = @_; + + my %all_hosts; + my %all_urls; + + for my $date (@$dates) { + my $stats = $merged->{$date}; + next unless $stats->{page_ips}; + + # Aggregate hosts + while (my ($host, $count) = each %{ $stats->{page_ips}{hosts} }) { + $all_hosts{$host} //= 0; + $all_hosts{$host} += $count; + } + + # Aggregate URLs + while (my ($url, $count) = each %{ $stats->{page_ips}{urls} }) { + $all_urls{$url} //= 0; + $all_urls{$url} += $count; + } + } + + return (\%all_hosts, \%all_urls); + } + + # Sub: build_top_hosts_section + # - Purpose: Build Top-50 hosts table for the aggregated period. + # - Params: $all_hosts (hashref), $days (int default 30). + # - Return: gemtext string. + sub build_top_hosts_section { + my ($all_hosts, $days) = @_; + $days //= 30; + + return generate_top_n_table( + title => "Top 50 Hosts (${days}-Day Total)", + data => $all_hosts, + headers => [ 'Host', 'Visitors' ], + ); + } + + # Sub: build_top_urls_section + # - Purpose: Build Top-50 URLs table for the aggregated period (with truncation). + # - Params: $all_urls (hashref), $days (int default 30). + # - Return: gemtext string. + sub build_top_urls_section { + my ($all_urls, $days) = @_; + $days //= 30; + + return generate_top_n_table( + title => "Top 50 URLs (${days}-Day Total)", + data => $all_urls, + headers => [ 'URL', 'Visitors' ], + is_url => 1, + ); + } + + # Sub: build_summary_links + # - Purpose: Links to other summary reports (30-day when not already on it). + # - Params: $current_days (int), $report_date (YYYYMMDD). + # - Return: gemtext string. + sub build_summary_links { + my ($current_days, $report_date) = @_; + + my $content = ''; + + # Only add link to 30-day summary when not on the 30-day report itself + if ($current_days != 30) { + $content .= "## Other Summary Reports\n\n"; + $content .= "=> ./30day_summary_$report_date.gmi 30-Day Summary Report\n\n"; + } + + return $content; + } + + # Sub: build_top3_urls_last_n_days_per_day + # - Purpose: For each of last N days, render the top URLs table. + # - Params: $stats_dir (str), $days (int default 30), $merged (hashref). + # - Return: gemtext string. 
+ sub build_top3_urls_last_n_days_per_day { + my ($stats_dir, $days, $merged) = @_; + $days //= 30; + my $content = "## Top 5 URLs Per Day (Last ${days} Days)\n\n"; + + my @all = DateHelper::last_month_dates(); + my @dates = @all; + @dates = @all[ 0 .. $days - 1 ] if @all > $days; + return $content . "(no data)\n\n" unless @dates; + + for my $date (@dates) { + + # Prefer in-memory merged stats if available; otherwise merge from disk + my $stats = $merged->{$date}; + if (!$stats || !($stats->{page_ips} && $stats->{page_ips}{urls})) { + $stats = Foostats::Merger::merge_for_date($stats_dir, $date); + } + next unless $stats && $stats->{page_ips} && $stats->{page_ips}{urls}; + + my ($y, $m, $d) = $date =~ /(\d{4})(\d{2})(\d{2})/; + $content .= "### $y-$m-$d\n\n"; + + my $urls = $stats->{page_ips}{urls}; + my @sorted = sort { ($urls->{$b} // 0) <=> ($urls->{$a} // 0) } keys %$urls; + next unless @sorted; + my $limit = @sorted < 5 ? @sorted : 5; + @sorted = @sorted[ 0 .. $limit - 1 ]; + + my @rows; + for my $u (@sorted) { + + # Read the count before rewriting the key for display: after the + # substitution, $u no longer matches the .gmi key in %$urls and + # the count would always come out 0 + my $count = $urls->{$u} // 0; + (my $label = $u) =~ s/\.gmi$/.html/; + push @rows, [ $label, $count ]; + } + truncate_urls_for_table(\@rows, 'Visitors'); + $content .= "```\n" . format_table([ 'URL', 'Visitors' ], \@rows) . "\n```\n\n"; + } + + return $content; + } + + # Sub: generate_index + # - Purpose: Create index.gmi/.html using the latest 30-day summary as content. + # - Params: $output_dir (str), $html_output_dir (str). + # - Return: undef. 
+ sub generate_index { + my ($output_dir, $html_output_dir) = @_; + + # Find latest 30-day summary + opendir(my $dh, $output_dir) or die "Cannot open directory $output_dir: $!"; + my @gmi_files = grep { /\.gmi$/ && $_ ne 'index.gmi' } readdir($dh); + closedir($dh); + + my @summaries_30day = sort { $b cmp $a } grep { /^30day_summary_/ } @gmi_files; + my $latest_30 = $summaries_30day[0]; + + my $index_path = "$output_dir/index.gmi"; + mkdir $html_output_dir unless -d $html_output_dir; + my $html_path = "$html_output_dir/index.html"; + + if ($latest_30) { + + # Read 30-day summary content and use it as index + my $summary_path = "$output_dir/$latest_30"; + open my $sfh, '<', $summary_path or die "$summary_path: $!"; + local $/ = undef; + my $content = <$sfh>; + close $sfh; + + say "Writing index to $index_path (using $latest_30)"; + FileHelper::write($index_path, $content); + + # HTML: use existing 30-day summary HTML if present, else convert + (my $latest_html = $latest_30) =~ s/\.gmi$/.html/; + my $summary_html_path = "$html_output_dir/$latest_html"; + if (-e $summary_html_path) { + open my $hh, '<', $summary_html_path or die "$summary_html_path: $!"; + local $/ = undef; + my $html_page = <$hh>; + close $hh; + say "Writing HTML index to $html_path (copy of $latest_html)"; + FileHelper::write($html_path, $html_page); + } + else { + my $html_content = gemtext_to_html($content); + my $html_page = generate_html_page("30-Day Summary Report", $html_content); + say "Writing HTML index to $html_path (from gemtext)"; + FileHelper::write($html_path, $html_page); + } + return; + } + + # Fallback: minimal index if no 30-day summary found + my $fallback = "# Foostats Reports Index\n\n30-day summary not found.\n"; + say "Writing fallback index to $index_path"; + FileHelper::write($index_path, $fallback); + + my $html_content = gemtext_to_html($fallback); + my $html_page = generate_html_page("Foostats Reports Index", $html_content); + say "Writing fallback HTML index to 
$html_path"; + FileHelper::write($html_path, $html_page); + } +} + +package main; + +# Package: main — CLI entrypoint and orchestration +# - Purpose: Parse options and invoke parse/replicate/report flows. +use Getopt::Long; +use Sys::Hostname; + +# Sub: usage +# - Purpose: Print usage and exit 0. +# - Params: none. +# - Return: never (exits). +sub usage { + print <<~"USAGE"; + Usage: $0 [options] + + Options: + --parse-logs Parse web and gemini logs. + --replicate Replicate stats from partner node. + --report Generate a report from the stats. + --all Perform all of the above actions (parse, replicate, report). + --stats-dir <path> Directory to store stats files. + Default: /var/www/htdocs/buetow.org/self/foostats + --output-dir <path> Directory to write .gmi report files. + Default: /var/gemini/stats.foo.zone + --html-output-dir <path> Directory to write .html report files. + Default: /var/www/htdocs/gemtexter/stats.foo.zone + --odds-file <path> File with odd URI patterns to filter. + Default: <stats-dir>/fooodds.txt + --filter-log <path> Log file for filtered requests. + Default: /var/log/fooodds + --partner-node <hostname> Hostname of the partner node for replication. + Default: fishfinger.buetow.org or blowfish.buetow.org + --version Show version information. + --help Show this help message. + USAGE + exit 0; +} + +# Sub: parse_logs +# - Purpose: Parse logs and persist aggregated stats files under $stats_dir. +# - Params: $stats_dir (str), $odds_file (str), $odds_log (str). +# - Return: undef. +sub parse_logs ($stats_dir, $odds_file, $odds_log) { + my $out = Foostats::FileOutputter->new(stats_dir => $stats_dir); + + $out->{stats} = Foostats::Logreader::parse_logs( + $out->last_processed_date('web'), + $out->last_processed_date('gemini'), + $odds_file, $odds_log + ); + + $out->write; +} + +# Sub: foostats_main +# - Purpose: Option parsing and execution of requested actions. +# - Params: none (reads @ARGV). +# - Return: exit code via program termination. 
+sub foostats_main { + my ($parse_logs, $replicate, $report, $all, $help, $version); + + # With default values + my $stats_dir = '/var/www/htdocs/buetow.org/self/foostats'; + my $odds_file = $stats_dir . '/fooodds.txt'; + my $odds_log = '/var/log/fooodds'; + my $output_dir; # Will default to $stats_dir/gemtext if not specified + my $html_output_dir; # Will default to /var/www/htdocs/gemtexter/stats.foo.zone if not specified + my $partner_node = + hostname eq 'fishfinger.buetow.org' + ? 'blowfish.buetow.org' + : 'fishfinger.buetow.org'; + + GetOptions + 'parse-logs!' => \$parse_logs, + 'filter-log=s' => \$odds_log, + 'odds-file=s' => \$odds_file, + 'replicate!' => \$replicate, + 'report!' => \$report, + 'all!' => \$all, + 'stats-dir=s' => \$stats_dir, + 'output-dir=s' => \$output_dir, + 'html-output-dir=s' => \$html_output_dir, + 'partner-node=s' => \$partner_node, + 'version' => \$version, + 'help|?' => \$help; + + if ($version) { + print "foostats " . VERSION . "\n"; + exit 0; + } + + usage() if $help; + + parse_logs($stats_dir, $odds_file, $odds_log) if $parse_logs or $all; + Foostats::Replicator::replicate($stats_dir, $partner_node) if $replicate or $all; + + # Set default output directories if not specified + $output_dir //= '/var/gemini/stats.foo.zone'; + $html_output_dir //= '/var/www/htdocs/gemtexter/stats.foo.zone'; + + Foostats::Reporter::report($stats_dir, $output_dir, $html_output_dir, + Foostats::Merger::merge($stats_dir)) + if $report + or $all; +} + +# Only run main flow when executed as a script, not when required (e.g., tests) +foostats_main() unless caller; diff --git a/gemfeed/examples/conf/frontends/scripts/gemtexter.sh.tpl b/gemfeed/examples/conf/frontends/scripts/gemtexter.sh.tpl new file mode 100644 index 00000000..2bba20c7 --- /dev/null +++ b/gemfeed/examples/conf/frontends/scripts/gemtexter.sh.tpl @@ -0,0 +1,65 @@ +#!/bin/sh + +PATH=$PATH:/usr/local/bin + +function ensure_site { + dir=$1 + repo=$2 + branch=$3 + + basename=$(basename $dir) + 
parent=$(dirname $dir) + + if [ ! -d $parent ]; then + mkdir -p $parent + fi + + cd $parent + if [ ! -e www.$basename ]; then + ln -s $basename www.$basename + fi + + if [ ! -e standby.$basename ]; then + ln -s $basename standby.$basename + fi + + if [ ! -d $basename ]; then + git clone $repo -b $branch --single-branch $basename + else + cd $basename + git pull + fi +} + +function ensure_links { + dir=$1 + target=$2 + + basename=$(basename $dir) + parent=$(dirname $dir) + + cd $parent + + if [ ! -e $target ]; then + ln -s $basename $target + fi + + if [ ! -e www.$target ]; then + ln -s $basename www.$target + fi + + if [ ! -e standby.$target ]; then + ln -s $basename standby.$target + fi +} + +for site in foo.zone; do + ensure_site \ + /var/gemini/$site \ + https://codeberg.org/snonux/$site \ + content-gemtext + ensure_site \ + /var/www/htdocs/gemtexter/$site \ + https://codeberg.org/snonux/$site \ + content-html +done diff --git a/gemfeed/examples/conf/frontends/scripts/rsync.sh.tpl b/gemfeed/examples/conf/frontends/scripts/rsync.sh.tpl new file mode 100644 index 00000000..c8d7b004 --- /dev/null +++ b/gemfeed/examples/conf/frontends/scripts/rsync.sh.tpl @@ -0,0 +1,8 @@ +#!/bin/sh + +PATH=$PATH:/usr/local/bin + +# Sync Joern's content over to Fishfinger! +if [ `hostname -s` = fishfinger ]; then + rsync -av --delete rsync://blowfish.wg0.wan.buetow.org/joernshtdocs/ /var/www/htdocs/joern/ +fi diff --git a/gemfeed/examples/conf/frontends/scripts/taskwarrior.sh.tpl b/gemfeed/examples/conf/frontends/scripts/taskwarrior.sh.tpl new file mode 100644 index 00000000..aaafbe98 --- /dev/null +++ b/gemfeed/examples/conf/frontends/scripts/taskwarrior.sh.tpl @@ -0,0 +1,5 @@ +PATH=$PATH:/usr/local/bin + +echo "Any tasks due before the next 14 days?" +# Using git user, as ssh keys are already there to sync the task db! 
+su - git -c '/usr/local/bin/task rc:/etc/taskrc due.before:14day minimal 2>/dev/null' diff --git a/gemfeed/examples/conf/frontends/var/nsd/etc/key.conf.tpl b/gemfeed/examples/conf/frontends/var/nsd/etc/key.conf.tpl new file mode 100644 index 00000000..d8d6c76d --- /dev/null +++ b/gemfeed/examples/conf/frontends/var/nsd/etc/key.conf.tpl @@ -0,0 +1,4 @@ +key: + name: blowfish.buetow.org + algorithm: hmac-sha256 + secret: "<%= $nsd_key %>" diff --git a/gemfeed/examples/conf/frontends/var/nsd/etc/nsd.conf.master.tpl b/gemfeed/examples/conf/frontends/var/nsd/etc/nsd.conf.master.tpl new file mode 100644 index 00000000..7f5ba56f --- /dev/null +++ b/gemfeed/examples/conf/frontends/var/nsd/etc/nsd.conf.master.tpl @@ -0,0 +1,17 @@ +include: "/var/nsd/etc/key.conf" + +server: + hide-version: yes + verbosity: 1 + database: "" # disable database + debug-mode: no + +remote-control: + control-enable: yes + control-interface: /var/run/nsd.sock + +<% for my $zone (@$dns_zones) { %> +zone: + name: "<%= $zone %>" + zonefile: "master/<%= $zone %>.zone" +<% } %> diff --git a/gemfeed/examples/conf/frontends/var/nsd/etc/nsd.conf.slave.tpl b/gemfeed/examples/conf/frontends/var/nsd/etc/nsd.conf.slave.tpl new file mode 100644 index 00000000..d9d93fe6 --- /dev/null +++ b/gemfeed/examples/conf/frontends/var/nsd/etc/nsd.conf.slave.tpl @@ -0,0 +1,17 @@ +include: "/var/nsd/etc/key.conf" + +server: + hide-version: yes + verbosity: 1 + database: "" # disable database + +remote-control: + control-enable: yes + control-interface: /var/run/nsd.sock + +<% for my $zone (@$dns_zones) { %> +zone: + name: "<%= $zone %>" + allow-notify: 23.88.35.144 blowfish.buetow.org + request-xfr: 23.88.35.144 blowfish.buetow.org +<% } %> diff --git a/gemfeed/examples/conf/frontends/var/nsd/zones/master/buetow.org.zone.tpl b/gemfeed/examples/conf/frontends/var/nsd/zones/master/buetow.org.zone.tpl new file mode 100644 index 00000000..0a0fb36f --- /dev/null +++ 
b/gemfeed/examples/conf/frontends/var/nsd/zones/master/buetow.org.zone.tpl @@ -0,0 +1,124 @@ +$ORIGIN buetow.org. +$TTL 4h +@ IN SOA fishfinger.buetow.org. hostmaster.buetow.org. ( + <%= time() %> ; serial + 1h ; refresh + 30m ; retry + 7d ; expire + 1h ) ; negative + IN NS fishfinger.buetow.org. + IN NS blowfish.buetow.org. + + 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover + 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +www 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +www 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +standby 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover +standby 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover +master 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +master 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover + + IN MX 10 fishfinger.buetow.org. + IN MX 20 blowfish.buetow.org. + +cool IN NS ns-75.awsdns-09.com. +cool IN NS ns-707.awsdns-24.net. +cool IN NS ns-1081.awsdns-07.org. +cool IN NS ns-1818.awsdns-35.co.uk. 
+ +paul 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +paul 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +www.paul 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +www.paul 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +standby.paul 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover +standby.paul 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover + +blog 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +blog 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +www.blog 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +www.blog 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +standby.blog 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover +standby.blog 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover + +tmp 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +tmp 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +www.tmp 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +www.tmp 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +standby.tmp 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover +standby.tmp 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover + +<% for my $host (@$f3s_hosts) { -%> +<%= $host %>. 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +<%= $host %>. 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +www.<%= $host %>. 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +www.<%= $host %>. 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +standby.<%= $host %>. 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover +standby.<%= $host %>. 
300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover +<% } -%> + +; So joern can directly preview the content before rsync happens from blowfish to fishfinger +joern IN CNAME blowfish +www.joern IN CNAME blowfish +standby.joern IN CNAME fishfinger + +dory 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +dory 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +www.dory 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +www.dory 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +standby.dory 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover +standby.dory 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover + +ecat 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +ecat 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +www.ecat 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +www.ecat 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +standby.ecat 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover +standby.ecat 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover + +fotos 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +fotos 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +www.fotos 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +www.fotos 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +standby.fotos 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover +standby.fotos 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover + +git 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +git 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +www.git 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +www.git 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +standby.git 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover +standby.git 
300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover + +blowfish 14400 IN A 23.88.35.144 +blowfish 14400 IN AAAA 2a01:4f8:c17:20f1::42 +blowfish IN MX 10 fishfinger.buetow.org. +blowfish IN MX 20 blowfish.buetow.org. +fishfinger 14400 IN A 46.23.94.99 +fishfinger 14400 IN AAAA 2a03:6000:6f67:624::99 +fishfinger IN MX 10 fishfinger.buetow.org. +fishfinger IN MX 20 blowfish.buetow.org. + +git1 1800 IN CNAME blowfish.buetow.org. +git2 1800 IN CNAME fishfinger.buetow.org. + +zapad.sofia 14400 IN CNAME 79-100-3-54.ip.btc-net.bg. +www2 14400 IN CNAME snonux.codeberg.page. +znc 1800 IN CNAME fishfinger.buetow.org. +www.znc 1800 IN CNAME fishfinger.buetow.org. +standby.znc 1800 IN CNAME fishfinger.buetow.org. +bnc 1800 IN CNAME fishfinger.buetow.org. +www.bnc 1800 IN CNAME fishfinger.buetow.org. + +protonmail._domainkey.paul IN CNAME protonmail.domainkey.d4xua2siwqfhvecokhuacmyn5fyaxmjk6q3hu2omv2z43zzkl73yq.domains.proton.ch. +protonmail2._domainkey.paul IN CNAME protonmail2.domainkey.d4xua2siwqfhvecokhuacmyn5fyaxmjk6q3hu2omv2z43zzkl73yq.domains.proton.ch. +protonmail3._domainkey.paul IN CNAME protonmail3.domainkey.d4xua2siwqfhvecokhuacmyn5fyaxmjk6q3hu2omv2z43zzkl73yq.domains.proton.ch. +paul IN TXT protonmail-verification=a42447901e320064d13e536db4d73ce600d715b7 +paul IN TXT "v=spf1 include:_spf.protonmail.ch mx ~all" ; quoted so SPF stays one string +_dmarc.paul IN TXT "v=DMARC1; p=none" ; DMARC lives at _dmarc; quoted so ; is not a comment +paul IN MX 10 mail.protonmail.ch. +paul IN MX 20 mailsec.protonmail.ch. +paul IN MX 42 blowfish.buetow.org. +paul IN MX 42 fishfinger.buetow.org. + +* IN MX 10 fishfinger.buetow.org. +* IN MX 20 blowfish.buetow.org. diff --git a/gemfeed/examples/conf/frontends/var/nsd/zones/master/dtail.dev.zone.tpl b/gemfeed/examples/conf/frontends/var/nsd/zones/master/dtail.dev.zone.tpl new file mode 100644 index 00000000..d5196e04 --- /dev/null +++ b/gemfeed/examples/conf/frontends/var/nsd/zones/master/dtail.dev.zone.tpl @@ -0,0 +1,21 @@ +$ORIGIN dtail.dev. +$TTL 4h +@ IN SOA fishfinger.buetow.org. hostmaster.buetow.org. 
( + <%= time() %> ; serial + 1h ; refresh + 30m ; retry + 7d ; expire + 1h ) ; negative + IN NS fishfinger.buetow.org. + IN NS blowfish.buetow.org. + + IN MX 10 fishfinger.buetow.org. + IN MX 20 blowfish.buetow.org. + + 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover + 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +www 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +www 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +standby 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover +standby 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover +github 86400 IN CNAME mimecast.github.io. diff --git a/gemfeed/examples/conf/frontends/var/nsd/zones/master/foo.zone.zone.tpl b/gemfeed/examples/conf/frontends/var/nsd/zones/master/foo.zone.zone.tpl new file mode 100644 index 00000000..d0755c91 --- /dev/null +++ b/gemfeed/examples/conf/frontends/var/nsd/zones/master/foo.zone.zone.tpl @@ -0,0 +1,34 @@ +$ORIGIN foo.zone. +$TTL 4h +@ IN SOA fishfinger.buetow.org. hostmaster.buetow.org. ( + <%= time() %> ; serial + 1h ; refresh + 30m ; retry + 7d ; expire + 1h ) ; negative + IN NS fishfinger.buetow.org. + IN NS blowfish.buetow.org. + + IN MX 10 fishfinger.buetow.org. + IN MX 20 blowfish.buetow.org. 
+ + 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover + 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +www 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +www 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +standby 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover +standby 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover + +f3s 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +f3s 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +www.f3s 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +www.f3s 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +standby.f3s 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover +standby.f3s 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover + +stats 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +stats 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +www.stats 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +www.stats 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +standby.stats 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover +standby.stats 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover diff --git a/gemfeed/examples/conf/frontends/var/nsd/zones/master/irregular.ninja.zone.tpl b/gemfeed/examples/conf/frontends/var/nsd/zones/master/irregular.ninja.zone.tpl new file mode 100644 index 00000000..d4f3d622 --- /dev/null +++ b/gemfeed/examples/conf/frontends/var/nsd/zones/master/irregular.ninja.zone.tpl @@ -0,0 +1,23 @@ +$ORIGIN irregular.ninja. +$TTL 4h +@ IN SOA fishfinger.buetow.org. hostmaster.buetow.org. ( + <%= time() %> ; serial + 1h ; refresh + 30m ; retry + 7d ; expire + 1h ) ; negative + IN NS fishfinger.buetow.org. + IN NS blowfish.buetow.org. 
+ + 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover + 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +www 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +www 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +standby 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover +standby 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover +www.alt 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +www.alt 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +alt 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +alt 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +standby.alt 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover +standby.alt 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover diff --git a/gemfeed/examples/conf/frontends/var/nsd/zones/master/paul.cyou.zone.tpl b/gemfeed/examples/conf/frontends/var/nsd/zones/master/paul.cyou.zone.tpl new file mode 100644 index 00000000..fdffef4f --- /dev/null +++ b/gemfeed/examples/conf/frontends/var/nsd/zones/master/paul.cyou.zone.tpl @@ -0,0 +1,20 @@ +$ORIGIN paul.cyou. +$TTL 4h +@ IN SOA fishfinger.buetow.org. hostmaster.buetow.org. ( + <%= time() %> ; serial + 1h ; refresh + 30m ; retry + 7d ; expire + 1h ) ; negative + IN NS fishfinger.buetow.org. + IN NS blowfish.buetow.org. + + IN MX 10 fishfinger.buetow.org. + IN MX 20 blowfish.buetow.org. 
+ + 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover + 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +www 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +www 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +standby 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover +standby 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover diff --git a/gemfeed/examples/conf/frontends/var/nsd/zones/master/snonux.foo.zone.tpl b/gemfeed/examples/conf/frontends/var/nsd/zones/master/snonux.foo.zone.tpl new file mode 100644 index 00000000..a9d002ae --- /dev/null +++ b/gemfeed/examples/conf/frontends/var/nsd/zones/master/snonux.foo.zone.tpl @@ -0,0 +1,20 @@ +$ORIGIN snonux.foo. +$TTL 4h +@ IN SOA fishfinger.buetow.org. hostmaster.buetow.org. ( + <%= time() %> ; serial + 1h ; refresh + 30m ; retry + 7d ; expire + 1h ) ; negative + IN NS fishfinger.buetow.org. + IN NS blowfish.buetow.org. + + IN MX 10 fishfinger.buetow.org. + IN MX 20 blowfish.buetow.org. + + 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover + 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +www 300 IN A <%= $ips->{current_master}{ipv4} %> ; Enable failover +www 300 IN AAAA <%= $ips->{current_master}{ipv6} %> ; Enable failover +standby 300 IN A <%= $ips->{current_standby}{ipv4} %> ; Enable failover +standby 300 IN AAAA <%= $ips->{current_standby}{ipv6} %> ; Enable failover diff --git a/gemfeed/examples/conf/frontends/var/www/htdocs/buetow.org/self/index.txt.tpl b/gemfeed/examples/conf/frontends/var/www/htdocs/buetow.org/self/index.txt.tpl new file mode 100644 index 00000000..6b8979da --- /dev/null +++ b/gemfeed/examples/conf/frontends/var/www/htdocs/buetow.org/self/index.txt.tpl @@ -0,0 +1 @@ +Welcome to <%= $hostname.'.'.$domain %>! 
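The zone templates above stamp out the same failover block for every delegated name: the name itself and `www.*` resolve to the current master, `standby.*` to the current standby, all with a low 300s TTL, so a failover only needs a re-render with the `$ips` roles swapped. A minimal stand-alone sketch of that expansion (Python rather than the templates' embedded Perl; the helper name and the documentation-range addresses are illustrative, not part of this diff):

```python
# Sketch of the repeated failover block from the nsd zone templates.
# `ips` mirrors the template variables $ips->{current_master} and
# $ips->{current_standby}; the addresses below are RFC 5737/3849 examples.

def failover_records(name, ips, ttl=300):
    """Return the six A/AAAA records the templates emit per name:
    name and www.name point at the current master, standby.name at
    the current standby."""
    lines = []
    for label, role in ((name, "current_master"),
                        (f"www.{name}", "current_master"),
                        (f"standby.{name}", "current_standby")):
        lines.append(f"{label} {ttl} IN A    {ips[role]['ipv4']} ; Enable failover")
        lines.append(f"{label} {ttl} IN AAAA {ips[role]['ipv6']} ; Enable failover")
    return "\n".join(lines)

ips = {
    "current_master":  {"ipv4": "192.0.2.1", "ipv6": "2001:db8::1"},
    "current_standby": {"ipv4": "192.0.2.2", "ipv6": "2001:db8::2"},
}
print(failover_records("stats", ips))
```

Re-rendering with the master and standby roles exchanged flips every low-TTL record at once, which is what the `; Enable failover` comments in the templates refer to; the `blowfish`/`fishfinger` host records keep a long 14400s TTL because they never move.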
diff --git a/gemfeed/examples/conf/playground/README.md b/gemfeed/examples/conf/playground/README.md new file mode 100644 index 00000000..0ed0975c --- /dev/null +++ b/gemfeed/examples/conf/playground/README.md @@ -0,0 +1,3 @@ +# Playground + +Some playground/testing with Rex! diff --git a/gemfeed/examples/conf/playground/Rexfile b/gemfeed/examples/conf/playground/Rexfile new file mode 100644 index 00000000..056a82e8 --- /dev/null +++ b/gemfeed/examples/conf/playground/Rexfile @@ -0,0 +1,24 @@ +use Rex -feature => ['1.14', 'exec_autodie']; +use Rex::Logger; +use Rex::Commands::Cron; + +group openbsd_canary => 'blowfish.buetow.org:2'; + +user 'rex'; +sudo TRUE; + +parallelism 5; + +desc 'Cron test'; +task 'openbsd_cron_test', group => 'openbsd_canary', sub { + cron add => '_gogios', { + minute => '5', + hour => '*', + day_of_month => '*', + month => '*', + day_of_week => '*', + command => '/path/to/your/cronjob', + }; +}; + +# vim: syntax=perl diff --git a/gemfeed/examples/conf/playground/openbsd_cron_test.debug.txt b/gemfeed/examples/conf/playground/openbsd_cron_test.debug.txt new file mode 100644 index 00000000..30fd1c09 --- /dev/null +++ b/gemfeed/examples/conf/playground/openbsd_cron_test.debug.txt @@ -0,0 +1,766 @@ +[paul@earth]~/git/rexfiles/testing% rex -m -d openbsd_cron_test &> openbsd_cron_test.debug.txt +[2023-07-30 13:36:36] DEBUG - This is Rex version: 1.14.2 +[2023-07-30 13:36:36] DEBUG - Command Line Parameters +[2023-07-30 13:36:36] DEBUG - m = 1 +[2023-07-30 13:36:36] DEBUG - d = 1 +[2023-07-30 13:36:36] DEBUG - Creating lock-file (Rexfile.lock) +[2023-07-30 13:36:36] DEBUG - Loading Rexfile +[2023-07-30 13:36:36] DEBUG - Disabling usage of a tty +[2023-07-30 13:36:36] DEBUG - Activating autodie. +[2023-07-30 13:36:36] DEBUG - Using Net::OpenSSH if present. +[2023-07-30 13:36:36] DEBUG - Add service check. +[2023-07-30 13:36:36] DEBUG - Setting set() to not append data. +[2023-07-30 13:36:36] DEBUG - Registering CMDB as template variables. 
+[2023-07-30 13:36:36] DEBUG - activating featureset >= 0.51 +[2023-07-30 13:36:36] DEBUG - activating featureset >= 0.40 +[2023-07-30 13:36:36] DEBUG - activating featureset >= 0.35 +[2023-07-30 13:36:36] DEBUG - activating featureset >= 0.31 +[2023-07-30 13:36:36] DEBUG - Enabling exec_autodie +[2023-07-30 13:36:36] DEBUG - Turning sudo globally on +[2023-07-30 13:36:36] DEBUG - Creating new distribution class of type: Base +[2023-07-30 13:36:36] DEBUG - new distribution class of type Rex::TaskList::Base created. +[2023-07-30 13:36:36] DEBUG - Creating task: openbsd_cron_test +[2023-07-30 13:36:36] DEBUG - Found Net::OpenSSH and Net::SFTP::Foreign - using it as default +[2023-07-30 13:36:36] DEBUG - Registering task: openbsd_cron_test +[2023-07-30 13:36:36] DEBUG - Initializing Logger from parameters found in Rexfile +[2023-07-30 13:36:36] DEBUG - Returning existing distribution class of type: Rex::TaskList::Base +[2023-07-30 13:36:36] DEBUG - Returning existing distribution class of type: Rex::TaskList::Base +[2023-07-30 13:36:36] DEBUG - Waiting for children to finish +[2023-07-30 13:36:36] INFO - Running task openbsd_cron_test on blowfish.buetow.org:2 +[2023-07-30 13:36:36] DEBUG - Rex::Group::Entry::Server (private_key): returning +[2023-07-30 13:36:36] DEBUG - Rex::Group::Entry::Server (public_key): returning +[2023-07-30 13:36:36] DEBUG - $VAR1 = ''; + +[2023-07-30 13:36:36] DEBUG - Auth-Information inside Task: +[2023-07-30 13:36:36] DEBUG - password => [[%s]] +[2023-07-30 13:36:36] DEBUG - auth_type => [[try]] +[2023-07-30 13:36:36] DEBUG - public_key => [[]] +[2023-07-30 13:36:36] DEBUG - sudo => [[]] +[2023-07-30 13:36:36] DEBUG - sudo_password => [[**********]] +[2023-07-30 13:36:36] DEBUG - port => [[]] +[2023-07-30 13:36:36] DEBUG - user => [[rex]] +[2023-07-30 13:36:36] DEBUG - private_key => [[]] +[2023-07-30 13:36:36] DEBUG - Using Net::OpenSSH for connection +[2023-07-30 13:36:36] DEBUG - Using user: rex +[2023-07-30 13:36:36] DEBUG - Connecting 
to blowfish.buetow.org:2 (rex) +[2023-07-30 13:36:36] DEBUG - get_openssh_opt() +[2023-07-30 13:36:36] DEBUG - $VAR1 = {}; + +[2023-07-30 13:36:36] DEBUG - OpenSSH: key_auth or not defined: blowfish.buetow.org:2 - rex +[2023-07-30 13:36:36] DEBUG - OpenSSH options: +[2023-07-30 13:36:36] DEBUG - $VAR1 = [ + 'blowfish.buetow.org', + 'user', + 'rex', + 'port', + '2', + 'master_opts', + [ + '-o', + 'LogLevel=QUIET', + '-o', + 'ConnectTimeout=2' + ], + 'default_ssh_opts', + $VAR1->[6] + ]; + +[2023-07-30 13:36:36] DEBUG - OpenSSH constructor options: +[2023-07-30 13:36:36] DEBUG - $VAR1 = {}; + +[2023-07-30 13:36:36] DEBUG - Trying following auth types: +[2023-07-30 13:36:36] DEBUG - $VAR1 = [ + 'key', + 'pass' + ]; + +[2023-07-30 13:36:36] DEBUG - Current Error-Code: 0 +[2023-07-30 13:36:36] DEBUG - Connected and authenticated to blowfish.buetow.org. +[2023-07-30 13:36:37] DEBUG - Successfully authenticated on blowfish.buetow.org:2. +[2023-07-30 13:36:37] DEBUG - Executing: perl -MFile::Spec -le 'print File::Spec->tmpdir' +[2023-07-30 13:36:37] DEBUG - Detecting shell... 
+[2023-07-30 13:36:37] DEBUG - Searching for shell: zsh
+[2023-07-30 13:36:37] DEBUG - Searching for shell: ksh
+[2023-07-30 13:36:37] DEBUG - Found shell and using: ksh
+[2023-07-30 13:36:37] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:37] DEBUG - $VAR1 = {};
+
+[2023-07-30 13:36:37] DEBUG - SSH/executing: LC_ALL=C ; export LC_ALL; perl -MFile::Spec -le 'print File::Spec->tmpdir'
+[2023-07-30 13:36:37] DEBUG - /tmp
+
+[2023-07-30 13:36:37] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:37] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:37] DEBUG - Sudo: Executing: which perl
+[2023-07-30 13:36:37] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:37] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S',
+          'fail_ok' => 1,
+          'valid_retval' => [
+            0
+          ]
+        };
+
+[2023-07-30 13:36:37] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/pkg/bin:/usr/pkg/sbin; export PATH; which perl '
+[2023-07-30 13:36:37] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/pkg/bin:/usr/pkg/sbin; export PATH; which perl '
+[2023-07-30 13:36:37] DEBUG - /usr/bin/perl
+
+[2023-07-30 13:36:37] DEBUG - Executing openbsd_cron_test
+[2023-07-30 13:36:37] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:37] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:37] DEBUG - Sudo: Executing: which lsb_release
+[2023-07-30 13:36:37] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:37] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S',
+          'fail_ok' => 1,
+          'valid_retval' => [
+            0
+          ]
+        };
+
+[2023-07-30 13:36:37] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/pkg/bin:/usr/pkg/sbin; export PATH; which lsb_release '
+[2023-07-30 13:36:37] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/pkg/bin:/usr/pkg/sbin; export PATH; which lsb_release '
+[2023-07-30 13:36:37] DEBUG - ========= ERR ============
+[2023-07-30 13:36:37] DEBUG - which: lsb_release: Command not found.
+
+[2023-07-30 13:36:37] DEBUG - ========= ERR ============
+[2023-07-30 13:36:37] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:37] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:37] DEBUG - Sudo: Executing: test -d c:/
+[2023-07-30 13:36:37] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:37] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:37] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d c:/ '
+[2023-07-30 13:36:37] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d c:/ '
+[2023-07-30 13:36:37] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:37] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:37] DEBUG - Sudo: Executing: test -e /etc/system-release
+[2023-07-30 13:36:37] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:37] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:37] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/system-release '
+[2023-07-30 13:36:37] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/system-release '
+[2023-07-30 13:36:37] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:37] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:37] DEBUG - Sudo: Executing: test -d /etc/system-release
+[2023-07-30 13:36:37] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:37] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:37] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/system-release '
+[2023-07-30 13:36:37] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/system-release '
+[2023-07-30 13:36:37] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:37] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:37] DEBUG - Sudo: Executing: test -e /etc/debian_version
+[2023-07-30 13:36:37] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:37] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:37] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/debian_version '
+[2023-07-30 13:36:37] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/debian_version '
+[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:38] DEBUG - Sudo: Executing: test -d /etc/debian_version
+[2023-07-30 13:36:38] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:38] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:38] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/debian_version '
+[2023-07-30 13:36:38] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/debian_version '
+[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:38] DEBUG - Sudo: Executing: test -e /etc/SuSE-release
+[2023-07-30 13:36:38] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:38] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:38] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/SuSE-release '
+[2023-07-30 13:36:38] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/SuSE-release '
+[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:38] DEBUG - Sudo: Executing: test -d /etc/SuSE-release
+[2023-07-30 13:36:38] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:38] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:38] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/SuSE-release '
+[2023-07-30 13:36:38] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/SuSE-release '
+[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:38] DEBUG - Sudo: Executing: test -e /etc/SUSE-brand
+[2023-07-30 13:36:38] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:38] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:38] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/SUSE-brand '
+[2023-07-30 13:36:38] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/SUSE-brand '
+[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:38] DEBUG - Sudo: Executing: test -d /etc/SUSE-brand
+[2023-07-30 13:36:38] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:38] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:38] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/SUSE-brand '
+[2023-07-30 13:36:38] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/SUSE-brand '
+[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:38] DEBUG - Sudo: Executing: test -e /etc/mageia-release
+[2023-07-30 13:36:38] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:38] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:38] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/mageia-release '
+[2023-07-30 13:36:38] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/mageia-release '
+[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:38] DEBUG - Sudo: Executing: test -d /etc/mageia-release
+[2023-07-30 13:36:38] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:38] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:38] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/mageia-release '
+[2023-07-30 13:36:38] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/mageia-release '
+[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:38] DEBUG - Sudo: Executing: test -e /etc/fedora-release
+[2023-07-30 13:36:38] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:38] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:38] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/fedora-release '
+[2023-07-30 13:36:38] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/fedora-release '
+[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:38] DEBUG - Sudo: Executing: test -d /etc/fedora-release
+[2023-07-30 13:36:38] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:38] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:38] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/fedora-release '
+[2023-07-30 13:36:38] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/fedora-release '
+[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:38] DEBUG - Sudo: Executing: test -e /etc/gentoo-release
+[2023-07-30 13:36:38] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:38] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:38] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/gentoo-release '
+[2023-07-30 13:36:38] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/gentoo-release '
+[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:38] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:38] DEBUG - Sudo: Executing: test -d /etc/gentoo-release
+[2023-07-30 13:36:38] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:38] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:38] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/gentoo-release '
+[2023-07-30 13:36:38] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/gentoo-release '
+[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:39] DEBUG - Sudo: Executing: test -e /etc/altlinux-release
+[2023-07-30 13:36:39] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:39] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:39] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/altlinux-release '
+[2023-07-30 13:36:39] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/altlinux-release '
+[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:39] DEBUG - Sudo: Executing: test -d /etc/altlinux-release
+[2023-07-30 13:36:39] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:39] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:39] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/altlinux-release '
+[2023-07-30 13:36:39] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/altlinux-release '
+[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:39] DEBUG - Sudo: Executing: test -e /etc/redhat-release
+[2023-07-30 13:36:39] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:39] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:39] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/redhat-release '
+[2023-07-30 13:36:39] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/redhat-release '
+[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:39] DEBUG - Sudo: Executing: test -d /etc/redhat-release
+[2023-07-30 13:36:39] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:39] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:39] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/redhat-release '
+[2023-07-30 13:36:39] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/redhat-release '
+[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:39] DEBUG - Sudo: Executing: test -e /etc/openwrt_release
+[2023-07-30 13:36:39] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:39] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:39] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/openwrt_release '
+[2023-07-30 13:36:39] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/openwrt_release '
+[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:39] DEBUG - Sudo: Executing: test -d /etc/openwrt_release
+[2023-07-30 13:36:39] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:39] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:39] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/openwrt_release '
+[2023-07-30 13:36:39] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/openwrt_release '
+[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:39] DEBUG - Sudo: Executing: test -e /etc/arch-release
+[2023-07-30 13:36:39] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:39] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:39] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/arch-release '
+[2023-07-30 13:36:39] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/arch-release '
+[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:39] DEBUG - Sudo: Executing: test -d /etc/arch-release
+[2023-07-30 13:36:39] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:39] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:39] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/arch-release '
+[2023-07-30 13:36:39] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/arch-release '
+[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:39] DEBUG - Sudo: Executing: test -e /etc/manjaro-release
+[2023-07-30 13:36:39] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:39] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:39] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/manjaro-release '
+[2023-07-30 13:36:39] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/manjaro-release '
+[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:39] DEBUG - Sudo: Executing: test -d /etc/manjaro-release
+[2023-07-30 13:36:39] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:39] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:39] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/manjaro-release '
+[2023-07-30 13:36:39] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/manjaro-release '
+[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:39] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:39] DEBUG - Sudo: Executing: uname -s
+[2023-07-30 13:36:39] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:39] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S',
+          'fail_ok' => 0,
+          'valid_retval' => [
+            0
+          ]
+        };
+
+[2023-07-30 13:36:39] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/pkg/bin:/usr/pkg/sbin; export PATH; uname -s '
+[2023-07-30 13:36:39] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/pkg/bin:/usr/pkg/sbin; export PATH; uname -s '
+[2023-07-30 13:36:40] DEBUG - OpenBSD
+
+[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:40] DEBUG - Sudo: Executing: which lsb_release
+[2023-07-30 13:36:40] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:40] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S',
+          'fail_ok' => 1,
+          'valid_retval' => [
+            0
+          ]
+        };
+
+[2023-07-30 13:36:40] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/pkg/bin:/usr/pkg/sbin; export PATH; which lsb_release '
+[2023-07-30 13:36:40] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/pkg/bin:/usr/pkg/sbin; export PATH; which lsb_release '
+[2023-07-30 13:36:40] DEBUG - ========= ERR ============
+[2023-07-30 13:36:40] DEBUG - which: lsb_release: Command not found.
+
+[2023-07-30 13:36:40] DEBUG - ========= ERR ============
+[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:40] DEBUG - Sudo: Executing: test -d c:/
+[2023-07-30 13:36:40] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:40] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:40] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d c:/ '
+[2023-07-30 13:36:40] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d c:/ '
+[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:40] DEBUG - Sudo: Executing: test -e /etc/system-release
+[2023-07-30 13:36:40] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:40] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:40] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/system-release '
+[2023-07-30 13:36:40] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/system-release '
+[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:40] DEBUG - Sudo: Executing: test -d /etc/system-release
+[2023-07-30 13:36:40] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:40] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:40] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/system-release '
+[2023-07-30 13:36:40] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/system-release '
+[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:40] DEBUG - Sudo: Executing: test -e /etc/debian_version
+[2023-07-30 13:36:40] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:40] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:40] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/debian_version '
+[2023-07-30 13:36:40] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/debian_version '
+[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:40] DEBUG - Sudo: Executing: test -d /etc/debian_version
+[2023-07-30 13:36:40] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:40] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:40] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/debian_version '
+[2023-07-30 13:36:40] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/debian_version '
+[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:40] DEBUG - Sudo: Executing: test -e /etc/SuSE-release
+[2023-07-30 13:36:40] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:40] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:40] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/SuSE-release '
+[2023-07-30 13:36:40] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/SuSE-release '
+[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:40] DEBUG - Sudo: Executing: test -d /etc/SuSE-release
+[2023-07-30 13:36:40] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:40] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:40] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/SuSE-release '
+[2023-07-30 13:36:40] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/SuSE-release '
+[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:40] DEBUG - Sudo: Executing: test -e /etc/SUSE-brand
+[2023-07-30 13:36:40] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:40] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:40] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/SUSE-brand '
+[2023-07-30 13:36:40] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/SUSE-brand '
+[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:40] DEBUG - Sudo: Executing: test -d /etc/SUSE-brand
+[2023-07-30 13:36:40] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:40] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:40] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/SUSE-brand '
+[2023-07-30 13:36:40] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/SUSE-brand '
+[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:40] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:40] DEBUG - Sudo: Executing: test -e /etc/mageia-release
+[2023-07-30 13:36:40] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:40] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:40] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/mageia-release '
+[2023-07-30 13:36:40] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/mageia-release '
+[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:41] DEBUG - Sudo: Executing: test -d /etc/mageia-release
+[2023-07-30 13:36:41] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:41] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:41] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/mageia-release '
+[2023-07-30 13:36:41] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/mageia-release '
+[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:41] DEBUG - Sudo: Executing: test -e /etc/fedora-release
+[2023-07-30 13:36:41] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:41] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:41] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/fedora-release '
+[2023-07-30 13:36:41] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/fedora-release '
+[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:41] DEBUG - Sudo: Executing: test -d /etc/fedora-release
+[2023-07-30 13:36:41] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:41] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:41] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/fedora-release '
+[2023-07-30 13:36:41] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/fedora-release '
+[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:41] DEBUG - Sudo: Executing: test -e /etc/gentoo-release
+[2023-07-30 13:36:41] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:41] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:41] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/gentoo-release '
+[2023-07-30 13:36:41] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/gentoo-release '
+[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:41] DEBUG - Sudo: Executing: test -d /etc/gentoo-release
+[2023-07-30 13:36:41] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:41] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:41] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/gentoo-release '
+[2023-07-30 13:36:41] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/gentoo-release '
+[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:41] DEBUG - Sudo: Executing: test -e /etc/altlinux-release
+[2023-07-30 13:36:41] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:41] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:41] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/altlinux-release '
+[2023-07-30 13:36:41] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/altlinux-release '
+[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:41] DEBUG - Sudo: Executing: test -d /etc/altlinux-release
+[2023-07-30 13:36:41] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:41] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:41] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/altlinux-release '
+[2023-07-30 13:36:41] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/altlinux-release '
+[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:41] DEBUG - Sudo: Executing: test -e /etc/redhat-release
+[2023-07-30 13:36:41] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:41] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:41] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/redhat-release '
+[2023-07-30 13:36:41] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/redhat-release '
+[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:41] DEBUG - Sudo: Executing: test -d /etc/redhat-release
+[2023-07-30 13:36:41] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:41] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:41] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/redhat-release '
+[2023-07-30 13:36:41] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/redhat-release '
+[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:41] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:41] DEBUG - Sudo: Executing: test -e /etc/openwrt_release
+[2023-07-30 13:36:41] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:41] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:41] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/openwrt_release '
+[2023-07-30 13:36:41] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/openwrt_release '
+[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:42] DEBUG - Sudo: Executing: test -d /etc/openwrt_release
+[2023-07-30 13:36:42] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:42] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:42] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/openwrt_release '
+[2023-07-30 13:36:42] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/openwrt_release '
+[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:42] DEBUG - Sudo: Executing: test -e /etc/arch-release
+[2023-07-30 13:36:42] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:42] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:42] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/arch-release '
+[2023-07-30 13:36:42] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/arch-release '
+[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:42] DEBUG - Sudo: Executing: test -d /etc/arch-release
+[2023-07-30 13:36:42] DEBUG - Shell/Bash: Got options:
+[2023-07-30 13:36:42] DEBUG - $VAR1 = {
+          'prepend_command' => 'sudo -p \'\' -S'
+        };
+
+[2023-07-30 13:36:42] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/arch-release '
+[2023-07-30 13:36:42] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/arch-release '
+[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (private_key): returning
+[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (public_key): returning
+[2023-07-30 13:36:42] DEBUG - Sudo: Executing: test -e /etc/manjaro-release +[2023-07-30 13:36:42] DEBUG - Shell/Bash: Got options: +[2023-07-30 13:36:42] DEBUG - $VAR1 = { + 'prepend_command' => 'sudo -p \'\' -S' + }; + +[2023-07-30 13:36:42] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/manjaro-release ' +[2023-07-30 13:36:42] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -e /etc/manjaro-release ' +[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (private_key): returning +[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (public_key): returning +[2023-07-30 13:36:42] DEBUG - Sudo: Executing: test -d /etc/manjaro-release +[2023-07-30 13:36:42] DEBUG - Shell/Bash: Got options: +[2023-07-30 13:36:42] DEBUG - $VAR1 = { + 'prepend_command' => 'sudo -p \'\' -S' + }; + +[2023-07-30 13:36:42] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/manjaro-release ' +[2023-07-30 13:36:42] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; test -d /etc/manjaro-release ' +[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (private_key): returning +[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (public_key): returning +[2023-07-30 13:36:42] DEBUG - Sudo: Executing: uname -s +[2023-07-30 13:36:42] DEBUG - Shell/Bash: Got options: +[2023-07-30 13:36:42] DEBUG - $VAR1 = { + 'valid_retval' => [ + 0 + ], + 'fail_ok' => 0, + 'prepend_command' => 'sudo -p \'\' -S' + }; + +[2023-07-30 13:36:42] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/pkg/bin:/usr/pkg/sbin; export PATH; uname -s ' +[2023-07-30 13:36:42] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/pkg/bin:/usr/pkg/sbin; export PATH; 
uname -s ' +[2023-07-30 13:36:42] DEBUG - OpenBSD + +[2023-07-30 13:36:42] DEBUG - Detecting shell... +[2023-07-30 13:36:42] DEBUG - Found shell in cache: ksh +[2023-07-30 13:36:42] DEBUG - Detecting shell... +[2023-07-30 13:36:42] DEBUG - Found shell in cache: ksh +[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (private_key): returning +[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (public_key): returning +[2023-07-30 13:36:42] DEBUG - Sudo: Executing: perl -e 'print scalar getpwuid($<)' +[2023-07-30 13:36:42] DEBUG - Shell/Bash: Got options: +[2023-07-30 13:36:42] DEBUG - $VAR1 = { + 'fail_ok' => 0, + 'valid_retval' => [ + 0 + ], + 'prepend_command' => 'sudo -p \'\' -S' + }; + +[2023-07-30 13:36:42] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/pkg/bin:/usr/pkg/sbin; export PATH; perl -e '\''print scalar getpwuid($<)'\'' ' +[2023-07-30 13:36:42] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/pkg/bin:/usr/pkg/sbin; export PATH; perl -e '\''print scalar getpwuid($<)'\'' ' +[2023-07-30 13:36:42] DEBUG - root +[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (private_key): returning +[2023-07-30 13:36:42] DEBUG - Rex::Group::Entry::Server (public_key): returning +[2023-07-30 13:36:42] DEBUG - Sudo: Executing: ( crontab -l -u _gogios >/tmp/umkmfvxctxjg.tmp ) >& /dev/null ; cat /tmp/umkmfvxctxjg.tmp ; rm /tmp/umkmfvxctxjg.tmp +[2023-07-30 13:36:42] DEBUG - Shell/Bash: Got options: +[2023-07-30 13:36:42] DEBUG - $VAR1 = { + 'prepend_command' => 'sudo -p \'\' -S', + 'valid_retval' => [ + 0 + ], + 'fail_ok' => 0 + }; + +[2023-07-30 13:36:42] DEBUG - sudo: exec: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/pkg/bin:/usr/pkg/sbin; export PATH; ( crontab -l -u _gogios 
>/tmp/umkmfvxctxjg.tmp ) >& /dev/null ; cat /tmp/umkmfvxctxjg.tmp ; rm /tmp/umkmfvxctxjg.tmp ' +[2023-07-30 13:36:42] DEBUG - Rex::Interface::Exec::OpenSSH/executing: sudo -p '' -S sh -c 'LC_ALL=C ; export LC_ALL; PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/pkg/bin:/usr/pkg/sbin; export PATH; ( crontab -l -u _gogios >/tmp/umkmfvxctxjg.tmp ) >& /dev/null ; cat /tmp/umkmfvxctxjg.tmp ; rm /tmp/umkmfvxctxjg.tmp ' +[2023-07-30 13:36:42] DEBUG - ========= ERR ============ +[2023-07-30 13:36:42] DEBUG - sh: >&/dev/null : illegal file descriptor name +cat: /tmp/umkmfvxctxjg.tmp: No such file or directory +rm: /tmp/umkmfvxctxjg.tmp: No such file or directory + +[2023-07-30 13:36:42] DEBUG - ========= ERR ============ +[2023-07-30 13:36:42] DEBUG - Error executing `( crontab -l -u _gogios >/tmp/umkmfvxctxjg.tmp ) >& /dev/null ; cat /tmp/umkmfvxctxjg.tmp ; rm /tmp/umkmfvxctxjg.tmp`: +[2023-07-30 13:36:42] DEBUG - STDOUT: +[2023-07-30 13:36:42] DEBUG - +[2023-07-30 13:36:42] DEBUG - STDERR: +[2023-07-30 13:36:42] DEBUG - sh: >&/dev/null : illegal file descriptor name +cat: /tmp/umkmfvxctxjg.tmp: No such file or directory +rm: /tmp/umkmfvxctxjg.tmp: No such file or directory +[2023-07-30 13:36:42] ERROR - Error executing task: +[2023-07-30 13:36:42] ERROR - Error during `i_run` at /usr/share/perl5/vendor_perl/Rex/Helper/Run.pm line 120, <ARGV> line 8. + Rex::Helper::Run::i_run("( crontab -l -u _gogios >/tmp/umkmfvxctxjg.tmp ) >& /dev/null"...) 
called at /usr/share/perl5/vendor_perl/Rex/Cron/FreeBSD.pm line 38 + Rex::Cron::FreeBSD::read_user_cron(Rex::Cron::FreeBSD=HASH(0x5603c05187c0), "_gogios") called at /usr/share/perl5/vendor_perl/Rex/Commands/Cron.pm line 224 + Rex::Commands::Cron::cron("add", "_gogios", HASH(0x5603bfff6048)) called at /loader/0x5603bedbd710/__Rexfile__.pm line 15 + Rex::CLI::__ANON__(HASH(0x5603bfa6efe0), ARRAY(0x5603bfa6f130)) called at /usr/share/perl5/vendor_perl/Rex/Interface/Executor/Default.pm line 59 + eval {...} called at /usr/share/perl5/vendor_perl/Rex/Interface/Executor/Default.pm line 41 + Rex::Interface::Executor::Default::exec(Rex::Interface::Executor::Default=HASH(0x5603bfa81380), HASH(0x5603bfa6efe0), ARRAY(0x5603bfa6f130)) called at /usr/share/perl5/vendor_perl/Rex/Task.pm line 880 + eval {...} called at /usr/share/perl5/vendor_perl/Rex/Task.pm line 884 + Rex::Task::run(Rex::Task=HASH(0x5603bfa81080), Rex::Group::Entry::Server=HASH(0x5603bfa6f460), "in_transaction", 0, "params", undef, "args", undef) called at /usr/share/perl5/vendor_perl/Rex/TaskList/Base.pm line 340 + eval {...} called at /usr/share/perl5/vendor_perl/Rex/TaskList/Base.pm line 334 + Rex::TaskList::Base::__ANON__(Rex::Fork::Task=HASH(0x5603bfa6f430)) called at /usr/share/perl5/vendor_perl/Rex/Fork/Task.pm line 32 + Rex::Fork::Task::start(Rex::Fork::Task=HASH(0x5603bfa6f430)) called at /usr/share/perl5/vendor_perl/Rex/Fork/Manager.pm line 35 + Rex::Fork::Manager::add(Rex::Fork::Manager=HASH(0x5603befb5748), CODE(0x5603be7912d0)) called at /usr/share/perl5/vendor_perl/Rex/TaskList/Base.pm line 316 + Rex::TaskList::Base::run(Rex::TaskList::Base=HASH(0x5603bfa80e10), Rex::Task=HASH(0x5603bfa813e0)) called at /usr/share/perl5/vendor_perl/Rex/TaskList.pm line 61 + Rex::TaskList::run("Rex::TaskList", Rex::Task=HASH(0x5603bfa813e0)) called at /usr/share/perl5/vendor_perl/Rex/RunList.pm line 67 + Rex::RunList::run_tasks(Rex::RunList=HASH(0x5603bf0cad90)) called at /usr/share/perl5/vendor_perl/Rex/CLI.pm 
line 374 + eval {...} called at /usr/share/perl5/vendor_perl/Rex/CLI.pm line 374 + Rex::CLI::__run__(Rex::CLI=HASH(0x5603be6594e8)) called at /usr/bin/rex line 22 + +[2023-07-30 13:36:42] DEBUG - Destroying all cached os information +[2023-07-30 13:36:43] DEBUG - Need to reinitialize connections. +[2023-07-30 13:36:43] DEBUG - Returning existing distribution class of type: Rex::TaskList::Base +[2023-07-30 13:36:43] ERROR - 1 out of 1 task(s) failed: +[2023-07-30 13:36:43] ERROR - openbsd_cron_test failed on blowfish.buetow.org:2 +[2023-07-30 13:36:43] ERROR - Error during `i_run` at /usr/share/perl5/vendor_perl/Rex/Helper/Run.pm line 120, <ARGV> line 8. +[2023-07-30 13:36:43] ERROR - Rex::Helper::Run::i_run("( crontab -l -u _gogios >/tmp/umkmfvxctxjg.tmp ) >& /dev/null"...) called at /usr/share/perl5/vendor_perl/Rex/Cron/FreeBSD.pm line 38 +[2023-07-30 13:36:43] ERROR - Rex::Cron::FreeBSD::read_user_cron(Rex::Cron::FreeBSD=HASH(0x5603c05187c0), "_gogios") called at /usr/share/perl5/vendor_perl/Rex/Commands/Cron.pm line 224 +[2023-07-30 13:36:43] ERROR - Rex::Commands::Cron::cron("add", "_gogios", HASH(0x5603bfff6048)) called at /loader/0x5603bedbd710/__Rexfile__.pm line 15 +[2023-07-30 13:36:43] ERROR - Rex::CLI::__ANON__(HASH(0x5603bfa6efe0), ARRAY(0x5603bfa6f130)) called at /usr/share/perl5/vendor_perl/Rex/Interface/Executor/Default.pm line 59 +[2023-07-30 13:36:43] ERROR - eval {...} called at /usr/share/perl5/vendor_perl/Rex/Interface/Executor/Default.pm line 41 +[2023-07-30 13:36:43] ERROR - Rex::Interface::Executor::Default::exec(Rex::Interface::Executor::Default=HASH(0x5603bfa81380), HASH(0x5603bfa6efe0), ARRAY(0x5603bfa6f130)) called at /usr/share/perl5/vendor_perl/Rex/Task.pm line 880 +[2023-07-30 13:36:43] ERROR - eval {...} called at /usr/share/perl5/vendor_perl/Rex/Task.pm line 884 +[2023-07-30 13:36:43] ERROR - Rex::Task::run(Rex::Task=HASH(0x5603bfa81080), Rex::Group::Entry::Server=HASH(0x5603bfa6f460), "in_transaction", 0, "params", undef, "args", 
undef) called at /usr/share/perl5/vendor_perl/Rex/TaskList/Base.pm line 340 +[2023-07-30 13:36:43] ERROR - eval {...} called at /usr/share/perl5/vendor_perl/Rex/TaskList/Base.pm line 334 +[2023-07-30 13:36:43] ERROR - Rex::TaskList::Base::__ANON__(Rex::Fork::Task=HASH(0x5603bfa6f430)) called at /usr/share/perl5/vendor_perl/Rex/Fork/Task.pm line 32 +[2023-07-30 13:36:43] ERROR - Rex::Fork::Task::start(Rex::Fork::Task=HASH(0x5603bfa6f430)) called at /usr/share/perl5/vendor_perl/Rex/Fork/Manager.pm line 35 +[2023-07-30 13:36:43] ERROR - Rex::Fork::Manager::add(Rex::Fork::Manager=HASH(0x5603befb5748), CODE(0x5603be7912d0)) called at /usr/share/perl5/vendor_perl/Rex/TaskList/Base.pm line 316 +[2023-07-30 13:36:43] ERROR - Rex::TaskList::Base::run(Rex::TaskList::Base=HASH(0x5603bfa80e10), Rex::Task=HASH(0x5603bfa813e0)) called at /usr/share/perl5/vendor_perl/Rex/TaskList.pm line 61 +[2023-07-30 13:36:43] ERROR - Rex::TaskList::run("Rex::TaskList", Rex::Task=HASH(0x5603bfa813e0)) called at /usr/share/perl5/vendor_perl/Rex/RunList.pm line 67 +[2023-07-30 13:36:43] ERROR - Rex::RunList::run_tasks(Rex::RunList=HASH(0x5603bf0cad90)) called at /usr/share/perl5/vendor_perl/Rex/CLI.pm line 374 +[2023-07-30 13:36:43] ERROR - eval {...} called at /usr/share/perl5/vendor_perl/Rex/CLI.pm line 374 +[2023-07-30 13:36:43] ERROR - Rex::CLI::__run__(Rex::CLI=HASH(0x5603be6594e8)) called at /usr/bin/rex line 22 +[2023-07-30 13:36:43] DEBUG - Removing lockfile +[2023-07-30 13:36:43] DEBUG - Returning existing distribution class of type: Rex::TaskList::Base diff --git a/gemfeed/examples/conf/playground/openbsd_cron_test.txt b/gemfeed/examples/conf/playground/openbsd_cron_test.txt new file mode 100644 index 00000000..fdeca282 --- /dev/null +++ b/gemfeed/examples/conf/playground/openbsd_cron_test.txt @@ -0,0 +1,42 @@ +[paul@earth]~/git/rexfiles/testing% rex -m openbsd_cron_test &> openbsd_cron_test.txt +[2023-07-30 13:36:19] INFO - Running task openbsd_cron_test on blowfish.buetow.org:2 
+[2023-07-30 13:36:27] ERROR - Error executing task: +[2023-07-30 13:36:27] ERROR - Error during `i_run` at /usr/share/perl5/vendor_perl/Rex/Helper/Run.pm line 120, <ARGV> line 8. + Rex::Helper::Run::i_run("( crontab -l -u _gogios >/tmp/johvumpjmtuo.tmp ) >& /dev/null"...) called at /usr/share/perl5/vendor_perl/Rex/Cron/FreeBSD.pm line 38 + Rex::Cron::FreeBSD::read_user_cron(Rex::Cron::FreeBSD=HASH(0x55f31eb606b0), "_gogios") called at /usr/share/perl5/vendor_perl/Rex/Commands/Cron.pm line 224 + Rex::Commands::Cron::cron("add", "_gogios", HASH(0x55f31e7a4198)) called at /loader/0x55f31d3e79c8/__Rexfile__.pm line 15 + Rex::CLI::__ANON__(HASH(0x55f31e795d60), ARRAY(0x55f31e7889c0)) called at /usr/share/perl5/vendor_perl/Rex/Interface/Executor/Default.pm line 59 + eval {...} called at /usr/share/perl5/vendor_perl/Rex/Interface/Executor/Default.pm line 41 + Rex::Interface::Executor::Default::exec(Rex::Interface::Executor::Default=HASH(0x55f31e0731c0), HASH(0x55f31e795d60), ARRAY(0x55f31e7889c0)) called at /usr/share/perl5/vendor_perl/Rex/Task.pm line 880 + eval {...} called at /usr/share/perl5/vendor_perl/Rex/Task.pm line 884 + Rex::Task::run(Rex::Task=HASH(0x55f31e795bf8), Rex::Group::Entry::Server=HASH(0x55f31ccb1010), "in_transaction", 0, "params", undef, "args", undef) called at /usr/share/perl5/vendor_perl/Rex/TaskList/Base.pm line 340 + eval {...} called at /usr/share/perl5/vendor_perl/Rex/TaskList/Base.pm line 334 + Rex::TaskList::Base::__ANON__(Rex::Fork::Task=HASH(0x55f31db4b820)) called at /usr/share/perl5/vendor_perl/Rex/Fork/Task.pm line 32 + Rex::Fork::Task::start(Rex::Fork::Task=HASH(0x55f31db4b820)) called at /usr/share/perl5/vendor_perl/Rex/Fork/Manager.pm line 35 + Rex::Fork::Manager::add(Rex::Fork::Manager=HASH(0x55f31ccbf6c8), CODE(0x55f31ccbf6f8)) called at /usr/share/perl5/vendor_perl/Rex/TaskList/Base.pm line 316 + Rex::TaskList::Base::run(Rex::TaskList::Base=HASH(0x55f31e072ed8), Rex::Task=HASH(0x55f31e72a460)) called at 
/usr/share/perl5/vendor_perl/Rex/TaskList.pm line 61 + Rex::TaskList::run("Rex::TaskList", Rex::Task=HASH(0x55f31e72a460)) called at /usr/share/perl5/vendor_perl/Rex/RunList.pm line 67 + Rex::RunList::run_tasks(Rex::RunList=HASH(0x55f31d6f6308)) called at /usr/share/perl5/vendor_perl/Rex/CLI.pm line 374 + eval {...} called at /usr/share/perl5/vendor_perl/Rex/CLI.pm line 374 + Rex::CLI::__run__(Rex::CLI=HASH(0x55f31cc844e8)) called at /usr/bin/rex line 22 + +[2023-07-30 13:36:27] ERROR - 1 out of 1 task(s) failed: +[2023-07-30 13:36:27] ERROR - openbsd_cron_test failed on blowfish.buetow.org:2 +[2023-07-30 13:36:27] ERROR - Error during `i_run` at /usr/share/perl5/vendor_perl/Rex/Helper/Run.pm line 120, <ARGV> line 8. +[2023-07-30 13:36:27] ERROR - Rex::Helper::Run::i_run("( crontab -l -u _gogios >/tmp/johvumpjmtuo.tmp ) >& /dev/null"...) called at /usr/share/perl5/vendor_perl/Rex/Cron/FreeBSD.pm line 38 +[2023-07-30 13:36:27] ERROR - Rex::Cron::FreeBSD::read_user_cron(Rex::Cron::FreeBSD=HASH(0x55f31eb606b0), "_gogios") called at /usr/share/perl5/vendor_perl/Rex/Commands/Cron.pm line 224 +[2023-07-30 13:36:27] ERROR - Rex::Commands::Cron::cron("add", "_gogios", HASH(0x55f31e7a4198)) called at /loader/0x55f31d3e79c8/__Rexfile__.pm line 15 +[2023-07-30 13:36:27] ERROR - Rex::CLI::__ANON__(HASH(0x55f31e795d60), ARRAY(0x55f31e7889c0)) called at /usr/share/perl5/vendor_perl/Rex/Interface/Executor/Default.pm line 59 +[2023-07-30 13:36:27] ERROR - eval {...} called at /usr/share/perl5/vendor_perl/Rex/Interface/Executor/Default.pm line 41 +[2023-07-30 13:36:27] ERROR - Rex::Interface::Executor::Default::exec(Rex::Interface::Executor::Default=HASH(0x55f31e0731c0), HASH(0x55f31e795d60), ARRAY(0x55f31e7889c0)) called at /usr/share/perl5/vendor_perl/Rex/Task.pm line 880 +[2023-07-30 13:36:27] ERROR - eval {...} called at /usr/share/perl5/vendor_perl/Rex/Task.pm line 884 +[2023-07-30 13:36:27] ERROR - Rex::Task::run(Rex::Task=HASH(0x55f31e795bf8), 
Rex::Group::Entry::Server=HASH(0x55f31ccb1010), "in_transaction", 0, "params", undef, "args", undef) called at /usr/share/perl5/vendor_perl/Rex/TaskList/Base.pm line 340 +[2023-07-30 13:36:27] ERROR - eval {...} called at /usr/share/perl5/vendor_perl/Rex/TaskList/Base.pm line 334 +[2023-07-30 13:36:27] ERROR - Rex::TaskList::Base::__ANON__(Rex::Fork::Task=HASH(0x55f31db4b820)) called at /usr/share/perl5/vendor_perl/Rex/Fork/Task.pm line 32 +[2023-07-30 13:36:27] ERROR - Rex::Fork::Task::start(Rex::Fork::Task=HASH(0x55f31db4b820)) called at /usr/share/perl5/vendor_perl/Rex/Fork/Manager.pm line 35 +[2023-07-30 13:36:27] ERROR - Rex::Fork::Manager::add(Rex::Fork::Manager=HASH(0x55f31ccbf6c8), CODE(0x55f31ccbf6f8)) called at /usr/share/perl5/vendor_perl/Rex/TaskList/Base.pm line 316 +[2023-07-30 13:36:27] ERROR - Rex::TaskList::Base::run(Rex::TaskList::Base=HASH(0x55f31e072ed8), Rex::Task=HASH(0x55f31e72a460)) called at /usr/share/perl5/vendor_perl/Rex/TaskList.pm line 61 +[2023-07-30 13:36:27] ERROR - Rex::TaskList::run("Rex::TaskList", Rex::Task=HASH(0x55f31e72a460)) called at /usr/share/perl5/vendor_perl/Rex/RunList.pm line 67 +[2023-07-30 13:36:27] ERROR - Rex::RunList::run_tasks(Rex::RunList=HASH(0x55f31d6f6308)) called at /usr/share/perl5/vendor_perl/Rex/CLI.pm line 374 +[2023-07-30 13:36:27] ERROR - eval {...} called at /usr/share/perl5/vendor_perl/Rex/CLI.pm line 374 +[2023-07-30 13:36:27] ERROR - Rex::CLI::__run__(Rex::CLI=HASH(0x55f31cc844e8)) called at /usr/bin/rex line 22 diff --git a/gemfeed/index.html b/gemfeed/index.html index 3e2afd23..bd0c2c77 100644 --- a/gemfeed/index.html +++ b/gemfeed/index.html @@ -15,6 +15,7 @@ <br /> <h2 style='display: inline' id='to-be-in-the-zone'>To be in the .zone!</h2><br /> <br /> +<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 - f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <a class='textlink' href='./2025-09-14-bash-golf-part-4.html'>2025-09-14 
- Bash Golf Part 4</a><br /> <a class='textlink' href='./2025-08-15-random-weird-things-iii.html'>2025-08-15 - Random Weird Things - Part Ⅲ</a><br /> <a class='textlink' href='./2025-08-05-local-coding-llm-with-ollama.html'>2025-08-05 - Local LLM for Coding with Ollama on macOS</a><br /> @@ -32,7 +33,7 @@ <a class='textlink' href='./2025-01-15-working-with-an-sre-interview.html'>2025-01-15 - Working with an SRE Interview</a><br /> <a class='textlink' href='./2025-01-01-posts-from-october-to-december-2024.html'>2025-01-01 - Posts from October to December 2024</a><br /> <a class='textlink' href='./2024-12-15-random-helix-themes.html'>2024-12-15 - Random Helix Themes</a><br /> -<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 - Deciding on the hardware</a><br /> +<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 - f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br /> <a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 - f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> <a class='textlink' href='./2024-10-24-staff-engineer-book-notes.html'>2024-10-24 - 'Staff Engineer' book notes</a><br /> <a class='textlink' href='./2024-10-02-gemtexter-3.0.0-lets-gemtext-again-4.html'>2024-10-02 - Gemtexter 3.0.0 - Let's Gemtext again⁴</a><br /> diff --git a/gemfeed/stunnel-nfs-quick-reference.txt b/gemfeed/stunnel-nfs-quick-reference.txt deleted file mode 100644 index ca7f577a..00000000 --- a/gemfeed/stunnel-nfs-quick-reference.txt +++ /dev/null @@ -1,78 +0,0 @@ -STUNNEL + NFS QUICK REFERENCE FOR r1 AND r2 -=========================================== - -COMPLETE SETUP (run as root on r1 and r2): ------------------------------------------- - -# 1. Install stunnel -dnf install -y stunnel - -# 2. 
Copy certificate from f0 (run on f0) -scp /usr/local/etc/stunnel/stunnel.pem root@r1:/etc/stunnel/ -scp /usr/local/etc/stunnel/stunnel.pem root@r2:/etc/stunnel/ - -# 3. Create stunnel config on r1/r2 -mkdir -p /etc/stunnel -cat > /etc/stunnel/stunnel.conf <<'EOF' -cert = /etc/stunnel/stunnel.pem -client = yes - -[nfs-ha] -accept = 127.0.0.1:2323 -connect = 192.168.1.138:2323 -EOF - -# 4. Create systemd service -cat > /etc/systemd/system/stunnel.service <<'EOF' -[Unit] -Description=SSL tunnel for network daemons -After=network.target - -[Service] -Type=forking -ExecStart=/usr/bin/stunnel /etc/stunnel/stunnel.conf -ExecStop=/usr/bin/killall stunnel -RemainAfterExit=yes - -[Install] -WantedBy=multi-user.target -EOF - -# 5. Enable and start stunnel -systemctl daemon-reload -systemctl enable --now stunnel - -# 6. Create mount point -mkdir -p /data/nfs/k3svolumes - -# 7. Test mount -mount -t nfs4 -o port=2323 127.0.0.1:/data/nfs/k3svolumes /data/nfs/k3svolumes - -# 8. Verify mount works -ls -la /data/nfs/k3svolumes/ - -# 9. Add to fstab for persistence -echo "127.0.0.1:/data/nfs/k3svolumes /data/nfs/k3svolumes nfs4 port=2323,_netdev 0 0" >> /etc/fstab - -# 10. Test fstab mount -umount /data/nfs/k3svolumes -mount /data/nfs/k3svolumes - -VERIFICATION COMMANDS: ----------------------- -systemctl status stunnel -mount | grep k3svolumes -df -h /data/nfs/k3svolumes -echo "test" > /data/nfs/k3svolumes/test-$(hostname).txt - -TROUBLESHOOTING: ----------------- -# Check stunnel logs -journalctl -u stunnel -f - -# Test connectivity -telnet 127.0.0.1 2323 - -# Restart services -systemctl restart stunnel -umount /data/nfs/k3svolumes && mount /data/nfs/k3svolumes
\ No newline at end of file @@ -13,7 +13,7 @@ </p> <h1 style='display: inline' id='hello'>Hello!</h1><br /> <br /> -<span class='quote'>This site was generated at 2025-09-29T09:38:00+03:00 by <span class='inlinecode'>Gemtexter</span></span><br /> +<span class='quote'>This site was generated at 2025-10-02T11:27:20+03:00 by <span class='inlinecode'>Gemtexter</span></span><br /> <br /> <span>Welcome to the foo.zone!</span><br /> <br /> @@ -51,6 +51,7 @@ <br /> <h3 style='display: inline' id='posts'>Posts</h3><br /> <br /> +<a class='textlink' href='./gemfeed/2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 - f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br /> <a class='textlink' href='./gemfeed/2025-09-14-bash-golf-part-4.html'>2025-09-14 - Bash Golf Part 4</a><br /> <a class='textlink' href='./gemfeed/2025-08-15-random-weird-things-iii.html'>2025-08-15 - Random Weird Things - Part Ⅲ</a><br /> <a class='textlink' href='./gemfeed/2025-08-05-local-coding-llm-with-ollama.html'>2025-08-05 - Local LLM for Coding with Ollama on macOS</a><br /> @@ -68,7 +69,7 @@ <a class='textlink' href='./gemfeed/2025-01-15-working-with-an-sre-interview.html'>2025-01-15 - Working with an SRE Interview</a><br /> <a class='textlink' href='./gemfeed/2025-01-01-posts-from-october-to-december-2024.html'>2025-01-01 - Posts from October to December 2024</a><br /> <a class='textlink' href='./gemfeed/2024-12-15-random-helix-themes.html'>2024-12-15 - Random Helix Themes</a><br /> -<a class='textlink' href='./gemfeed/2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 - Deciding on the hardware</a><br /> +<a class='textlink' href='./gemfeed/2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 - f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br /> <a class='textlink' href='./gemfeed/2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 - f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> <a 
class='textlink' href='./gemfeed/2024-10-24-staff-engineer-book-notes.html'>2024-10-24 - 'Staff Engineer' book notes</a><br /> <a class='textlink' href='./gemfeed/2024-10-02-gemtexter-3.0.0-lets-gemtext-again-4.html'>2024-10-02 - Gemtexter 3.0.0 - Let's Gemtext again⁴</a><br /> diff --git a/uptime-stats.html b/uptime-stats.html index c94949df..0bfa741b 100644 --- a/uptime-stats.html +++ b/uptime-stats.html @@ -13,7 +13,7 @@ </p> <h1 style='display: inline' id='my-machine-uptime-stats'>My machine uptime stats</h1><br /> <br /> -<span class='quote'>This site was last updated at 2025-09-29T09:38:00+03:00</span><br /> +<span class='quote'>This site was last updated at 2025-10-02T11:27:20+03:00</span><br /> <br /> <span>The following stats were collected via <span class='inlinecode'>uptimed</span> on all of my personal computers over many years and the output was generated by <span class='inlinecode'>guprecords</span>, the global uptime records stats analyser of mine.</span><br /> <br /> |
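An aside on the failure captured in the committed log above: `( crontab -l -u _gogios > file ) >& /dev/null` dies on OpenBSD because `>& word` is csh/bash syntax, and OpenBSD's `/bin/sh` (pdksh) only accepts a file-descriptor digit after `>&` — hence "illegal file descriptor name". The POSIX-portable spelling is `> /dev/null 2>&1`. A minimal sketch of the fix (the `_gogios`/`crontab` specifics come from the log; the temp-file names here are purely illustrative):

```shell
#!/bin/sh
# Portable replacement for the bash-ism `( cmd > "$tmp" ) >& /dev/null`:
# first redirect the subshell's stdout to /dev/null, then duplicate
# stderr onto the same target with the POSIX `2>&1` form.
tmp=$(mktemp)

# Stand-in for `crontab -l -u _gogios`: one line of real output plus
# some stderr noise, so both redirections are exercised.
( echo captured > "$tmp"; echo noise-on-stderr >&2 ) > /dev/null 2>&1

result=$(cat "$tmp")
echo "$result"    # prints: captured
rm -f "$tmp"
```

This form runs identically under bash, OpenBSD ksh, and any other POSIX sh, which is presumably what the Rex `Rex::Cron` code path would need to emit to work on OpenBSD.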
