authorPaul Buetow <paul@buetow.org>2025-08-05 09:56:24 +0300
committerPaul Buetow <paul@buetow.org>2025-08-05 09:56:24 +0300
commitf91c600034713a1a3ed4e9a216d1f546606b3dd6 (patch)
treec9e1d737861177e516411cd357b9d3f9c17d20e6
parentffbaa8ac965a9df72e3aa3e67ed4ffe3f307a002 (diff)
Update content for html
-rw-r--r--about/resources.html196
-rw-r--r--gemfeed/2025-08-05-local-coding-llm-with-ollama.html14
-rw-r--r--gemfeed/atom.xml16
-rw-r--r--index.html2
-rw-r--r--uptime-stats.html24
5 files changed, 130 insertions, 122 deletions
diff --git a/about/resources.html b/about/resources.html
index d83a3588..0e8d15d1 100644
--- a/about/resources.html
+++ b/about/resources.html
@@ -50,107 +50,107 @@
<span>In random order:</span><br />
<br />
<ul>
-<li>The KCNA (Kubernetes and Cloud Native Associate) Book; Nigel Poulton</li>
-<li>Concurrency in Go; Katherine Cox-Buday; O&#39;Reilly</li>
-<li>Kubernetes Cookbook; Sameer Naik, Sébastien Goasguen, Jonathan Michaux; O&#39;Reilly</li>
-<li>The DevOps Handbook; Gene Kim, Jez Humble, Patrick Debois, John Willis; Audible</li>
-<li>Distributed Systems: Principles and Paradigms; Andrew S. Tanenbaum; Pearson</li>
<li>Effective awk programming; Arnold Robbins; O&#39;Reilly</li>
-<li>Think Raku (aka Think Perl 6); Laurent Rosenfeld, Allen B. Downey; O&#39;Reilly</li>
-<li>Clusterbau mit Linux-HA; Michael Schwartzkopff; O&#39;Reilly</li>
-<li>21st Century C: C Tips from the New School; Ben Klemens; O&#39;Reilly</li>
-<li>Data Science at the Command Line; Jeroen Janssens; O&#39;Reilly</li>
-<li>Developing Games in Java; David Brackeen and others...; New Riders</li>
-<li>Higher Order Perl; Mark Dominus; Morgan Kaufmann</li>
-<li>Ultimate Go Notebook; Bill Kennedy</li>
-<li>The Kubernetes Book; Nigel Poulton; Unabridged Audiobook</li>
-<li>DNS and BIND; Cricket Liu; O&#39;Reilly</li>
-<li>Java ist auch eine Insel; Christian Ullenboom; </li>
-<li>Object-Oriented Programming with ANSI-C; Axel-Tobias Schreiner</li>
-<li>97 things every SRE should know; Emil Stolarsky, Jaime Woo; O&#39;Reilly</li>
-<li>C++ Programming Language; Bjarne Stroustrup;</li>
-<li>Hands-on Infrastructure Monitoring with Prometheus; Joel Bastos, Pedro Araujo; Packt </li>
<li>The Practice of System and Network Administration; Thomas A. Limoncelli, Christina J. Hogan, Strata R. Chalup; Addison-Wesley Professional</li>
<li>Pro Git; Scott Chacon, Ben Straub; Apress</li>
-<li>Funktionale Programmierung; Peter Pepper; Springer</li>
-<li>100 Go Mistakes and How to Avoid Them; Teiva Harsanyi; Manning Publications</li>
-<li>Go Brain Teasers - Exercise Your Mind; Miki Tebeka; The Pragmatic Programmers</li>
+<li>Perl New Features; Joshua McAdams, brian d foy; Perl School</li>
+<li>The Go Programming Language; Alan A. A. Donovan; Addison-Wesley Professional</li>
<li>Learn You Some Erlang for Great Good; Fred Hébert; No Starch Press</li>
+<li>Ultimate Go Notebook; Bill Kennedy</li>
+<li>Raku Fundamentals; Moritz Lenz; Apress</li>
<li>Effective Java; Joshua Bloch; Addison-Wesley Professional</li>
+<li>Distributed Systems: Principles and Paradigms; Andrew S. Tanenbaum; Pearson</li>
+<li>The KCNA (Kubernetes and Cloud Native Associate) Book; Nigel Poulton</li>
+<li>Object-Oriented Programming with ANSI-C; Axel-Tobias Schreiner</li>
+<li>Data Science at the Command Line; Jeroen Janssens; O&#39;Reilly</li>
+<li>Developing Games in Java; David Brackeen and others...; New Riders</li>
+<li>Amazon Web Services in Action; Michael Wittig and Andreas Wittig; Manning Publications</li>
+<li>C++ Programming Language; Bjarne Stroustrup;</li>
+<li>Polished Ruby Programming; Jeremy Evans; Packt Publishing</li>
+<li>Systemprogrammierung in Go; Frank Müller; dpunkt</li>
+<li>21st Century C: C Tips from the New School; Ben Klemens; O&#39;Reilly</li>
+<li>DNS and BIND; Cricket Liu; O&#39;Reilly</li>
+<li>Pro Puppet; James Turnbull, Jeffrey McCune; Apress</li>
<li>Programming Ruby 3.3 (5th Edition); Noel Rappin, with Dave Thomas; The Pragmatic Bookshelf</li>
-<li>Modern Perl; Chromatic ; Onyx Neon Press</li>
-<li>The Go Programming Language; Alan A. A. Donovan; Addison-Wesley Professional</li>
+<li>Raku Recipes; J.J. Merelo; Apress</li>
<li>The Pragmatic Programmer; David Thomas; Addison-Wesley</li>
-<li>Learn You a Haskell for Great Good!; Miran Lipovaca; No Starch Press</li>
+<li>Concurrency in Go; Katherine Cox-Buday; O&#39;Reilly</li>
+<li>Higher Order Perl; Mark Dominus; Morgan Kaufmann</li>
<li>Learning eBPF; Liz Rice; O&#39;Reilly</li>
-<li>Perl New Features; Joshua McAdams, brian d foy; Perl School</li>
-<li>Systemprogrammierung in Go; Frank Müller; dpunkt</li>
<li>DevOps And Site Reliability Engineering Handbook; Stephen Fleming; Audible</li>
-<li>The Docker Book; James Turnbull; Kindle</li>
-<li>Site Reliability Engineering; How Google runs production systems; O&#39;Reilly</li>
-<li>Raku Recipes; J.J. Merelo; Apress</li>
-<li>Programming Perl aka "The Camel Book"; Tom Christiansen, brian d foy, Larry Wall &amp; Jon Orwant; O&#39;Reilly</li>
<li>Tmux 2: Productive Mouse-free Development; Brian P. Hogan; The Pragmatic Programmers</li>
-<li>Pro Puppet; James Turnbull, Jeffrey McCune; Apress</li>
+<li>Go Brain Teasers - Exercise Your Mind; Miki Tebeka; The Pragmatic Programmers</li>
+<li>Programming Perl aka "The Camel Book"; Tom Christiansen, brian d foy, Larry Wall &amp; Jon Orwant; O&#39;Reilly</li>
+<li>Think Raku (aka Think Perl 6); Laurent Rosenfeld, Allen B. Downey; O&#39;Reilly</li>
+<li>97 things every SRE should know; Emil Stolarsky, Jaime Woo; O&#39;Reilly</li>
+<li>Modern Perl; Chromatic ; Onyx Neon Press</li>
+<li>Funktionale Programmierung; Peter Pepper; Springer</li>
+<li>100 Go Mistakes and How to Avoid Them; Teiva Harsanyi; Manning Publications</li>
+<li>Kubernetes Cookbook; Sameer Naik, Sébastien Goasguen, Jonathan Michaux; O&#39;Reilly</li>
+<li>Java ist auch eine Insel; Christian Ullenboom; </li>
+<li>Clusterbau mit Linux-HA; Michael Schwartzkopff; O&#39;Reilly</li>
+<li>Learn You a Haskell for Great Good!; Miran Lipovaca; No Starch Press</li>
+<li>The Docker Book; James Turnbull; Kindle</li>
<li>Systems Performance Tuning; Gian-Paolo D. Musumeci and others...; O&#39;Reilly</li>
+<li>Site Reliability Engineering; How Google runs production systems; O&#39;Reilly</li>
+<li>The DevOps Handbook; Gene Kim, Jez Humble, Patrick Debois, John Willis; Audible</li>
+<li>Hands-on Infrastructure Monitoring with Prometheus; Joel Bastos, Pedro Araujo; Packt </li>
+<li>The Kubernetes Book; Nigel Poulton; Unabridged Audiobook</li>
<li>Terraform Cookbook; Mikael Krief; Packt Publishing</li>
-<li>Polished Ruby Programming; Jeremy Evans; Packt Publishing</li>
-<li>Amazon Web Services in Action; Michael Wittig and Andreas Wittig; Manning Publications</li>
-<li>Raku Fundamentals; Moritz Lenz; Apress</li>
</ul><br />
<h2 style='display: inline' id='technical-references'>Technical references</h2><br />
<br />
<span>I didn&#39;t read them from beginning to end, but I use them to look things up. The books are in random order:</span><br />
<br />
<ul>
-<li>Implementing Service Level Objectives; Alex Hidalgo; O&#39;Reilly</li>
-<li>BPF Performance Tools - Linux System and Application Observability, Brendan Gregg; Addison Wesley</li>
-<li>The Linux Programming Interface; Michael Kerrisk; No Starch Press </li>
<li>Go: Design Patterns for Real-World Projects; Mat Ryer; Packt</li>
-<li>Understanding the Linux Kernel; Daniel P. Bovet, Marco Cesati; O&#39;Reilly</li>
-<li>Algorithms; Robert Sedgewick, Kevin Wayne; Addison Wesley</li>
<li>Groovy Kurz &amp; Gut; Joerg Staudemeier; O&#39;Reilly</li>
<li>Relayd and Httpd Mastery; Michael W Lucas</li>
+<li>Understanding the Linux Kernel; Daniel P. Bovet, Marco Cesati; O&#39;Reilly</li>
+<li>BPF Performance Tools - Linux System and Application Observability, Brendan Gregg; Addison Wesley</li>
+<li>Algorithms; Robert Sedgewick, Kevin Wayne; Addison Wesley</li>
+<li>The Linux Programming Interface; Michael Kerrisk; No Starch Press </li>
+<li>Implementing Service Level Objectives; Alex Hidalgo; O&#39;Reilly</li>
</ul><br />
<h2 style='display: inline' id='self-development-and-soft-skills-books'>Self-development and soft-skills books</h2><br />
<br />
<span>In random order:</span><br />
<br />
<ul>
-<li>Getting Things Done; David Allen</li>
-<li>Solve for Happy; Mo Gawdat (RE-READ 1ST TIME)</li>
-<li>The Complete Software Developer&#39;s Career Guide; John Sonmez; Unabridged Audiobook</li>
-<li>101 Essays that change the way you think; Brianna Wiest; Audiobook</li>
-<li>The Phoenix Project - A Novel About IT, DevOps, and Helping your Business Win; Gene Kim and Kevin Behr; Trade Select</li>
-<li>Consciousness: A Very Short Introduction; Susan Blackmore; Oxford Uiversity Press</li>
-<li>Stop starting, start finishing; Arne Roock; Lean-Kanban University </li>
-<li>Ultralearning; Scott Young; Thorsons</li>
-<li>Ultralearning; Anna Laurent; Self-published via Amazon</li>
-<li>The Good Enough Job; Simone Stolzoff; Ebury Edge</li>
-<li>Deep Work; Cal Newport; Piatkus</li>
-<li>The Joy of Missing Out; Christina Crook; New Society Publishers</li>
<li>Psycho-Cybernetics; Maxwell Maltz; Perigee Books</li>
-<li>The Daily Stoic; Ryan Holiday, Stephen Hanselman; Profile Books</li>
+<li>Ultralearning; Anna Laurent; Self-published via Amazon</li>
+<li>Slow Productivity; Cal Newport; Penguin Random House</li>
+<li>Buddha and Einstein Walk Into a Bar; Guy Joseph Ale, Claire Bloom; Blackstone Publishing</li>
<li>The Obstacle Is The Way; Ryan Holiday; Profile Books Ltd</li>
-<li>Time Management for System Administrators; Thomas A. Limoncelli; O&#39;Reilly</li>
-<li>Digital Minimalism; Cal Newport; Portofolio Penguin</li>
-<li>Who Moved My Cheese?; Dr. Spencer Johnson; Vermilion</li>
-<li>The 7 Habits Of Highly Effective People; Stephen R. Covey; Simon &amp; Schuster UK</li>
-<li>Staff Engineer: Leadership beyond the management track; Will Larson; Audiobook</li>
-<li>Never Split the Difference; Chris Voss, Tahl Raz; Random House Business</li>
+<li>Soft Skills; John Sonmez; Manning Publications</li>
+<li>101 Essays that change the way you think; Brianna Wiest; Audiobook</li>
<li>The Off Switch; Mark Cropley; Virgin Books (RE-READ 1ST TIME)</li>
-<li>Eat That Frog; Brian Tracy</li>
-<li>Atomic Habits; James Clear; Random House Business</li>
-<li>Search Inside Yourself - The Unexpected path to Achieving Success, Happiness (and World Peace); Chade-Meng Tan, Daniel Goleman, Jon Kabat-Zinn; HarperOne</li>
+<li>Who Moved My Cheese?; Dr. Spencer Johnson; Vermilion</li>
<li>Meditation for Mortals; Oliver Burkeman; Audiobook</li>
+<li>Getting Things Done; David Allen</li>
+<li>The Daily Stoic; Ryan Holiday, Stephen Hanselman; Profile Books</li>
+<li>Staff Engineer: Leadership beyond the management track; Will Larson; Audiobook</li>
+<li>The 7 Habits Of Highly Effective People; Stephen R. Covey; Simon &amp; Schuster UK</li>
+<li>Consciousness: A Very Short Introduction; Susan Blackmore; Oxford University Press</li>
+<li>The Joy of Missing Out; Christina Crook; New Society Publishers</li>
<li>Coders at Work - Reflections on the craft of programming; Peter Seibel and Mitchell Dorian et al.; Audiobook</li>
<li>So Good They Can&#39;t Ignore You; Cal Newport; Business Plus</li>
-<li>Eat That Frog!; Brian Tracy; Hodder Paperbacks</li>
-<li>Soft Skills; John Sommez; Manning Publications</li>
-<li>Slow Productivity; Cal Newport; Penguin Random House</li>
-<li>The Bullet Journal Method; Ryder Carroll; Fourth Estate</li>
-<li>Buddah and Einstein walk into a Bar; Guy Joseph Ale, Claire Bloom; Blackstone Publishing</li>
+<li>Stop starting, start finishing; Arne Roock; Lean-Kanban University </li>
+<li>Solve for Happy; Mo Gawdat (RE-READ 1ST TIME)</li>
+<li>Deep Work; Cal Newport; Piatkus</li>
+<li>Digital Minimalism; Cal Newport; Portfolio Penguin</li>
+<li>Time Management for System Administrators; Thomas A. Limoncelli; O&#39;Reilly</li>
+<li>Eat That Frog; Brian Tracy</li>
<li>The Power of Now; Eckhart Tolle; Yellow Kite</li>
+<li>Never Split the Difference; Chris Voss, Tahl Raz; Random House Business</li>
+<li>The Bullet Journal Method; Ryder Carroll; Fourth Estate</li>
<li>Influence without Authority; A. Cohen, D. Bradford; Wiley</li>
+<li>The Phoenix Project - A Novel About IT, DevOps, and Helping your Business Win; Gene Kim and Kevin Behr; Trade Select</li>
+<li>Eat That Frog!; Brian Tracy; Hodder Paperbacks</li>
+<li>Atomic Habits; James Clear; Random House Business</li>
+<li>The Good Enough Job; Simone Stolzoff; Ebury Edge</li>
+<li>The Complete Software Developer&#39;s Career Guide; John Sonmez; Unabridged Audiobook</li>
+<li>Search Inside Yourself - The Unexpected path to Achieving Success, Happiness (and World Peace); Chade-Meng Tan, Daniel Goleman, Jon Kabat-Zinn; HarperOne</li>
+<li>Ultralearning; Scott Young; Thorsons</li>
</ul><br />
<a class='textlink' href='../notes/index.html'>Here are notes of mine for some of the books</a><br />
<br />
@@ -159,22 +159,22 @@
<span>Some of these were in-person with exams; others were online learning lectures only. In random order:</span><br />
<br />
<ul>
+<li>Protocol buffers; O&#39;Reilly Online</li>
+<li>Linux Security and Isolation APIs Training; Michael Kerrisk; 3-day on-site training</li>
+<li>Scripting Vim; Damian Conway; O&#39;Reilly Online</li>
+<li>MySQL Deep Dive Workshop; 2-day on-site training</li>
+<li>Structure and Interpretation of Computer Programs; Harold Abelson and more...; </li>
<li>F5 Loadbalancers Training; 2-day on-site training; F5, Inc. </li>
-<li>Red Hat Certified System Administrator; Course + certification (Although I had the option, I decided not to take the next course as it is more effective to self learn what I need)</li>
<li>Developing IaC with Terraform (with Live Lessons); O&#39;Reilly Online</li>
+<li>The Well-Grounded Rubyist Video Edition; David. A. Black; O&#39;Reilly Online</li>
<li>Cloud Operations on AWS - Learn how to configure, deploy, maintain, and troubleshoot your AWS environments; 3-day online live training with labs; Amazon</li>
-<li>Structure and Interpretation of Computer Programs; Harold Abelson and more...; </li>
+<li>Red Hat Certified System Administrator; Course + certification (Although I had the option, I decided not to take the next course as it is more effective to self learn what I need)</li>
+<li>Apache Tomcat Best Practices; 3-day on-site training</li>
<li>Functional programming lecture; Remote University of Hagen</li>
-<li>Scripting Vim; Damian Conway; O&#39;Reilly Online</li>
-<li>Protocol buffers; O&#39;Reilly Online</li>
<li>Ultimate Go Programming; Bill Kennedy; O&#39;Reilly Online</li>
-<li>The Well-Grounded Rubyist Video Edition; David. A. Black; O&#39;Reilly Online</li>
-<li>MySQL Deep Dive Workshop; 2-day on-site training</li>
+<li>AWS Immersion Day; Amazon; 1-day interactive online training </li>
<li>The Ultimate Kubernetes Bootcamp; School of Devops; O&#39;Reilly Online</li>
<li>Algorithms Video Lectures; Robert Sedgewick; O&#39;Reilly Online</li>
-<li>AWS Immersion Day; Amazon; 1-day interactive online training </li>
-<li>Apache Tomcat Best Practises; 3-day on-site training</li>
-<li>Linux Security and Isolation APIs Training; Michael Kerrisk; 3-day on-site training</li>
</ul><br />
<h2 style='display: inline' id='technical-guides'>Technical guides</h2><br />
<br />
@@ -182,8 +182,8 @@
<br />
<ul>
<li>How CPUs work at https://cpu.land</li>
-<li>Advanced Bash-Scripting Guide </li>
<li>Raku Guide at https://raku.guide </li>
+<li>Advanced Bash-Scripting Guide </li>
</ul><br />
<h2 style='display: inline' id='podcasts'>Podcasts</h2><br />
<br />
@@ -192,20 +192,20 @@
<span>In random order:</span><br />
<br />
<ul>
+<li>Fork Around And Find Out</li>
+<li>The Changelog Podcast(s)</li>
+<li>BSD Now [BSD]</li>
+<li>Fallthrough [Golang]</li>
<li>Modern Mentor</li>
-<li>Maintainable</li>
-<li>Hidden Brain</li>
-<li>Backend Banter</li>
<li>Practical AI</li>
-<li>BSD Now [BSD]</li>
-<li>Deep Questions with Cal Newport</li>
-<li>Dev Interrupted</li>
+<li>The Pragmatic Engineer Podcast</li>
<li>Cup o&#39; Go [Golang]</li>
+<li>Backend Banter</li>
+<li>Maintainable</li>
<li>The ProdCast (Google SRE Podcast)</li>
-<li>Fallthrough [Golang]</li>
-<li>The Changelog Podcast(s)</li>
-<li>The Pragmatic Engineer Podcast</li>
-<li>Fork Around And Find Out</li>
+<li>Deep Questions with Cal Newport</li>
+<li>Hidden Brain</li>
+<li>Dev Interrupted</li>
</ul><br />
<h3 style='display: inline' id='podcasts-i-liked'>Podcasts I liked</h3><br />
<br />
@@ -213,37 +213,37 @@
<br />
<ul>
<li>FLOSS weekly</li>
-<li>Go Time (predecessor of fallthrough)</li>
-<li>Modern Mentor</li>
-<li>Ship It (predecessor of Fork Around And Find Out)</li>
<li>CRE: Chaosradio Express [german]</li>
<li>Java Pub House</li>
+<li>Ship It (predecessor of Fork Around And Find Out)</li>
+<li>Go Time (predecessor of fallthrough)</li>
+<li>Modern Mentor</li>
</ul><br />
<h2 style='display: inline' id='newsletters-i-like'>Newsletters I like</h2><br />
<br />
<span>This is a mix of tech and non-tech newsletters I am subscribed to. In random order:</span><br />
<br />
<ul>
+<li>Changelog News</li>
+<li>Register Spill</li>
+<li>Ruby Weekly</li>
<li>Monospace Mentor</li>
-<li>VK Newsletter</li>
-<li>Applied Go Weekly Newsletter</li>
+<li>Golang Weekly</li>
<li>The Valuable Dev</li>
+<li>VK Newsletter</li>
<li>The Imperfectionist</li>
+<li>byteSizeGo</li>
<li>The Pragmatic Engineer</li>
-<li>Golang Weekly</li>
-<li>Changelog News</li>
+<li>Applied Go Weekly Newsletter</li>
<li>Andreas Brandhorst Newsletter (Sci-Fi author)</li>
-<li>Register Spill</li>
-<li>Ruby Weekly</li>
-<li>byteSizeGo</li>
</ul><br />
<h2 style='display: inline' id='magazines-i-liked'>Magazines I like(d)</h2><br />
<br />
<span>This is a mix of tech I like(d). I may not be a current subscriber, but now and then, I buy an issue. In random order:</span><br />
<br />
<ul>
-<li>freeX (not published anymore)</li>
<li>Linux Magazine</li>
+<li>freeX (not published anymore)</li>
<li>LWN (online only)</li>
<li>Linux User</li>
</ul><br />
diff --git a/gemfeed/2025-08-05-local-coding-llm-with-ollama.html b/gemfeed/2025-08-05-local-coding-llm-with-ollama.html
index 1bccab43..008315b4 100644
--- a/gemfeed/2025-08-05-local-coding-llm-with-ollama.html
+++ b/gemfeed/2025-08-05-local-coding-llm-with-ollama.html
@@ -82,14 +82,16 @@
<span>The model I&#39;ll be mainly using in this blog post (<span class='inlinecode'>qwen2.5-coder:14b-instruct</span>) is particularly interesting as:</span><br />
<br />
<ul>
-<li><span class='inlinecode'>instruct</span>: Indicates this is the instruction-tuned variant of QWE, optimised for diverse tasks including coding</li>
+<li><span class='inlinecode'>instruct</span>: Indicates this is the instruction-tuned variant, optimised for diverse tasks including coding</li>
<li><span class='inlinecode'>coder</span>: Tells me that this model was trained on a mix of code and text data, making it especially effective for programming assistance</li>
</ul><br />
+<a class='textlink' href='https://ollama.com/library/qwen2.5-coder'>https://ollama.com/library/qwen2.5-coder</a><br />
<a class='textlink' href='https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct'>https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct</a><br />
<br />
-<span>For general thinking tasks, I found <span class='inlinecode'>deepseek-r1:14b</span> to be useful. For instance, I utilised <span class='inlinecode'>deepseek-r1:14b</span> to format this blog post and correct some English errors, demonstrating its effectiveness in natural language processing tasks. Additionally, it has proven invaluable for adding context and enhancing clarity in technical explanations, all while running locally on the MacBook Pro. Admittedly, it was a lot slower than "just using ChatGPT", but still within minutes. </span><br />
+<span>For general thinking tasks, I found <span class='inlinecode'>deepseek-r1:14b</span> to be useful (in the future, I also want to try other <span class='inlinecode'>qwen</span> models here). For instance, I utilised <span class='inlinecode'>deepseek-r1:14b</span> to format this blog post and correct some English errors, demonstrating its effectiveness in natural language processing tasks. Additionally, it has proven invaluable for adding context and enhancing clarity in technical explanations, all while running locally on the MacBook Pro. Admittedly, it was a lot slower than "just using ChatGPT", but still within a minute or so. </span><br />
<br />
<a class='textlink' href='https://ollama.com/library/deepseek-r1:14b'>https://ollama.com/library/deepseek-r1:14b</a><br />
+<a class='textlink' href='https://huggingface.co/deepseek-ai/DeepSeek-R1'>https://huggingface.co/deepseek-ai/DeepSeek-R1</a><br />
<br />
<span>A quantised LLM (as mentioned above) has been converted from high-precision weight representations (typically 16- or 32-bit floating point) to lower-precision formats, such as 8-bit integers. This reduces the overall memory footprint of the model, making it significantly smaller and enabling it to run more efficiently on hardware with limited resources, or to allow higher throughput on GPUs and CPUs. The benefits of quantisation include reduced storage and faster inference times due to simpler computations and better memory bandwidth utilisation. However, quantisation can introduce a drop in model accuracy because the lower numerical precision means the model cannot represent parameter values as precisely. In some cases, it may lead to instability or unexpected outputs in specific tasks or edge cases.</span><br />
<br />
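The size/accuracy trade-off described above can be sketched in a few lines. This is a deliberately minimal symmetric 8-bit scheme for illustration only; it is not the scheme Ollama's quantised models (e.g. <span class='inlinecode'>q4_K_M</span>) actually use:

```python
# Toy float32-style weights of a model layer
w = [0.02, -1.37, 0.85, 2.11, -0.64]

# Symmetric 8-bit quantisation: map [-max|w|, +max|w|] onto [-127, 127]
scale = max(abs(x) for x in w) / 127.0
q = [round(x / scale) for x in w]       # each value now fits in one signed byte

# Dequantise when the weights are needed for computation
w_restored = [qi * scale for qi in q]

# Memory: 1 byte per weight instead of 4 -> roughly 4x smaller.
# Accuracy: rounding error is bounded by scale/2 per weight, which is
# exactly the precision loss discussed above.
max_err = max(abs(a - b) for a, b in zip(w, w_restored))
print(q, round(max_err, 5))
```

Real quantisation formats use per-block scales and sub-byte bit widths, but the principle is the same: fewer bits per parameter, at the cost of a bounded rounding error.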
@@ -441,6 +443,8 @@ content = "{CODE}"
<br />
<span>As you can see, I have also added other models, such as Mistral Nemo and DeepSeek R1, so that I can switch between them in Helix. Other than that, the completion parameters are interesting. They define how the LLM should interact with the text in the text editor based on the given examples.</span><br />
<br />
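To make the shape of such a setup concrete, here is a trimmed configuration sketch for Helix's <span class='inlinecode'>languages.toml</span>. The key names and values below are assumptions modelled on the examples shipped with <span class='inlinecode'>lsp-ai</span>; consult its repository for the exact schema:

```toml
[language-server.lsp-ai]
command = "lsp-ai"

# Where lsp-ai keeps context about the open files
[language-server.lsp-ai.config.memory]
file_store = {}

# An Ollama-served model to route completions to
[language-server.lsp-ai.config.models.model1]
type = "ollama"
model = "qwen2.5-coder:14b-instruct"

# Completion settings: which model to use and how much to generate
[language-server.lsp-ai.config.completion]
model = "model1"

[language-server.lsp-ai.config.completion.parameters]
max_context = 2048
options = { num_predict = 32, temperature = 0.1 }
```

Keeping <span class='inlinecode'>num_predict</span> small and the temperature low is a sensible starting point for in-editor completion, where short, deterministic suggestions matter more than creativity.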
+<span>If you want to see more <span class='inlinecode'>lsp-ai</span> configuration examples, there are some for Vim and Helix in the <span class='inlinecode'>lsp-ai</span> git repository!</span><br />
+<br />
<h3 style='display: inline' id='code-completion-in-action'>Code completion in action</h3><br />
<br />
<span>The screenshot shows how Ollama&#39;s <span class='inlinecode'>qwen2.5-coder</span> model provides code completion suggestions within the Helix editor. The LSP auto-completion is triggered by typing <span class='inlinecode'>&lt;CURSOR&gt;</span> in the code snippet, and Ollama responds with relevant completions based on the context.</span><br />
@@ -451,15 +455,15 @@ content = "{CODE}"
<br />
<span>I found GitHub Copilot to be still faster than <span class='inlinecode'>qwen2.5-coder:14b</span>, but the local LLM one is actually workable for me already. And, as mentioned earlier, things will likely improve in the future regarding local LLMs. So I am excited about the future of local LLMs and coding tools like Ollama and Helix.</span><br />
<br />
-<span>After trying <span class='inlinecode'>qwen3-coder:30b-a3b-q4_K_M</span> (following the publication of this blog post), I found it to be significantly faster and more capable than the previous model, making it a promising option for local coding tasks. Experimentation reveals that even current local setups are surprisingly effective for routine coding tasks, offering a glimpse into the future of on-machine AI assistance.</span><br />
+<span class='quote'>After trying <span class='inlinecode'>qwen3-coder:30b-a3b-q4_K_M</span> (following the publication of this blog post), I found it to be significantly faster and more capable than the previous model, making it a promising option for local coding tasks. Experimentation reveals that even current local setups are surprisingly effective for routine coding tasks, offering a glimpse into the future of on-machine AI assistance.</span><br />
<br />
<h2 style='display: inline' id='conclusion'>Conclusion</h2><br />
<br />
-<span>Will there ever be a time we can run larger models (60B, 100B, ...and larger) on consumer hardware, or even on our phones? We are not quite there yet, but I am optimistic that we will see significant improvements in the next few years. As hardware capabilities improve and/or become cheaper, and more efficient models are developed, the landscape of local AI coding assistants will continue to evolve. </span><br />
+<span>Will there ever be a time we can run larger models (60B, 100B, ...and larger) on consumer hardware, or even on our phones? We are not quite there yet, but I am optimistic that we will see improvements in the next few years. As hardware capabilities improve and/or become cheaper, and more efficient models are developed (or new techniques will be invented to make language models more effective), the landscape of local AI coding assistants will continue to evolve. </span><br />
<br />
<span>For now, even the models listed in this blog post are very promising already, and they run on consumer-grade hardware (at least in the realm of the initial tests I&#39;ve performed... the ones in this blog post are overly simplistic, though! But they were good for getting started with Ollama and initial demonstration)! I will continue experimenting with Ollama and other local LLMs to see how they can enhance my coding experience. I may cancel my Copilot subscription, which I currently use only for in-editor auto-completion, at some point.</span><br />
<br />
-<span>However, truth be told, I don&#39;t think the setup described in this blog post currently matches the performance of commercial models like Claude Code (Sonnet 4, Opus 4), Gemini 2.5 Pro, and others. Maybe we could get close if we had the high-end hardware needed to run the largest Qwen Coder model available. But, as mentioned already, that is out of reach for occasional coders like me. Furthermore, I want to continue coding manually to some degree, as otherwise I will start to forget how to write for-loops, which can be awkward... However, do we always need the best model when AI can help generate boilerplate or repetitive tasks even with smaller models?</span><br />
+<span>However, truth be told, I don&#39;t think the setup described in this blog post currently matches the performance of commercial models like Claude Code (Sonnet 4, Opus 4), Gemini 2.5 Pro, the OpenAI models and others. Maybe we could get close if we had the high-end hardware needed to run the largest Qwen Coder model available. But, as mentioned already, that is out of reach for occasional coders like me. Furthermore, I want to continue coding manually to some degree, as otherwise I will start to forget how to write for-loops, which can be awkward... However, do we always need the best model when AI can help generate boilerplate or repetitive tasks even with smaller models?</span><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
diff --git a/gemfeed/atom.xml b/gemfeed/atom.xml
index 8fc709ae..1903efe6 100644
--- a/gemfeed/atom.xml
+++ b/gemfeed/atom.xml
@@ -1,6 +1,6 @@
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
- <updated>2025-08-04T17:48:22+03:00</updated>
+ <updated>2025-08-05T09:54:29+03:00</updated>
<title>foo.zone feed</title>
<subtitle>To be in the .zone!</subtitle>
<link href="https://foo.zone/gemfeed/atom.xml" rel="self" />
@@ -89,14 +89,16 @@
<span>The model I&#39;ll be mainly using in this blog post (<span class='inlinecode'>qwen2.5-coder:14b-instruct</span>) is particularly interesting as:</span><br />
<br />
<ul>
-<li><span class='inlinecode'>instruct</span>: Indicates this is the instruction-tuned variant of QWE, optimised for diverse tasks including coding</li>
+<li><span class='inlinecode'>instruct</span>: Indicates this is the instruction-tuned variant, optimised for diverse tasks including coding</li>
<li><span class='inlinecode'>coder</span>: Tells me that this model was trained on a mix of code and text data, making it especially effective for programming assistance</li>
</ul><br />
+<a class='textlink' href='https://ollama.com/library/qwen2.5-coder'>https://ollama.com/library/qwen2.5-coder</a><br />
<a class='textlink' href='https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct'>https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct</a><br />
<br />
-<span>For general thinking tasks, I found <span class='inlinecode'>deepseek-r1:14b</span> to be useful. For instance, I utilised <span class='inlinecode'>deepseek-r1:14b</span> to format this blog post and correct some English errors, demonstrating its effectiveness in natural language processing tasks. Additionally, it has proven invaluable for adding context and enhancing clarity in technical explanations, all while running locally on the MacBook Pro. Admittedly, it was a lot slower than "just using ChatGPT", but still within minutes. </span><br />
+<span>For general thinking tasks, I found <span class='inlinecode'>deepseek-r1:14b</span> to be useful (in the future, I also want to try other <span class='inlinecode'>qwen</span> models here). For instance, I utilised <span class='inlinecode'>deepseek-r1:14b</span> to format this blog post and correct some English errors, demonstrating its effectiveness in natural language processing tasks. Additionally, it has proven invaluable for adding context and enhancing clarity in technical explanations, all while running locally on the MacBook Pro. Admittedly, it was a lot slower than "just using ChatGPT", but still within a minute or so. </span><br />
<br />
<a class='textlink' href='https://ollama.com/library/deepseek-r1:14b'>https://ollama.com/library/deepseek-r1:14b</a><br />
+<a class='textlink' href='https://huggingface.co/deepseek-ai/DeepSeek-R1'>https://huggingface.co/deepseek-ai/DeepSeek-R1</a><br />
<br />
<span>A quantised LLM (as mentioned above) has been converted from high-precision weight representations (typically 16- or 32-bit floating point) to lower-precision formats, such as 8-bit integers. This reduces the overall memory footprint of the model, making it significantly smaller and enabling it to run more efficiently on hardware with limited resources, or to allow higher throughput on GPUs and CPUs. The benefits of quantisation include reduced storage and faster inference times due to simpler computations and better memory bandwidth utilisation. However, quantisation can introduce a drop in model accuracy because the lower numerical precision means the model cannot represent parameter values as precisely. In some cases, it may lead to instability or unexpected outputs in specific tasks or edge cases.</span><br />
<br />
@@ -448,6 +450,8 @@ content = "{CODE}"
<br />
<span>As you can see, I have also added other models, such as Mistral Nemo and DeepSeek R1, so that I can switch between them in Helix. Other than that, the completion parameters are interesting. They define how the LLM should interact with the text in the text editor based on the given examples.</span><br />
<br />
+<span>If you want to see more <span class='inlinecode'>lsp-ai</span> configuration examples, there are some for Vim and Helix in the <span class='inlinecode'>lsp-ai</span> git repository!</span><br />
+<br />
<h3 style='display: inline' id='code-completion-in-action'>Code completion in action</h3><br />
<br />
<span>The screenshot shows how Ollama&#39;s <span class='inlinecode'>qwen2.5-coder</span> model provides code completion suggestions within the Helix editor. The LSP auto-completion is triggered by typing <span class='inlinecode'>&lt;CURSOR&gt;</span> in the code snippet, and Ollama responds with relevant completions based on the context.</span><br />
@@ -458,15 +462,15 @@ content = "{CODE}"
<br />
<span>I found GitHub Copilot to be still faster than <span class='inlinecode'>qwen2.5-coder:14b</span>, but the local LLM one is actually workable for me already. And, as mentioned earlier, things will likely improve in the future regarding local LLMs. So I am excited about the future of local LLMs and coding tools like Ollama and Helix.</span><br />
<br />
-<span>After trying <span class='inlinecode'>qwen3-coder:30b-a3b-q4_K_M</span> (following the publication of this blog post), I found it to be significantly faster and more capable than the previous model, making it a promising option for local coding tasks. Experimentation reveals that even current local setups are surprisingly effective for routine coding tasks, offering a glimpse into the future of on-machine AI assistance.</span><br />
+<span class='quote'>After trying <span class='inlinecode'>qwen3-coder:30b-a3b-q4_K_M</span> (following the publication of this blog post), I found it to be significantly faster and more capable than the previous model, making it a promising option for local coding tasks. Experimentation reveals that even current local setups are surprisingly effective for routine coding tasks, offering a glimpse into the future of on-machine AI assistance.</span><br />
<br />
<h2 style='display: inline' id='conclusion'>Conclusion</h2><br />
<br />
-<span>Will there ever be a time we can run larger models (60B, 100B, ...and larger) on consumer hardware, or even on our phones? We are not quite there yet, but I am optimistic that we will see significant improvements in the next few years. As hardware capabilities improve and/or become cheaper, and more efficient models are developed, the landscape of local AI coding assistants will continue to evolve. </span><br />
+<span>Will there ever be a time when we can run larger models (60B, 100B, ...and larger) on consumer hardware, or even on our phones? We are not quite there yet, but I am optimistic that we will see improvements in the next few years. As hardware capabilities improve and/or become cheaper, and more efficient models are developed (or new techniques are invented to make language models more effective), the landscape of local AI coding assistants will continue to evolve. </span><br />
<br />
<span>For now, even the models listed in this blog post are already promising, and they run on consumer-grade hardware (at least within the scope of the initial tests I&#39;ve performed... the tests in this blog post are overly simplistic, though! But they were good for getting started with Ollama and an initial demonstration)! I will continue experimenting with Ollama and other local LLMs to see how they can enhance my coding experience. At some point, I may cancel my Copilot subscription, which I currently use only for in-editor auto-completion.</span><br />
<br />
-<span>However, truth be told, I don&#39;t think the setup described in this blog post currently matches the performance of commercial models like Claude Code (Sonnet 4, Opus 4), Gemini 2.5 Pro, and others. Maybe we could get close if we had the high-end hardware needed to run the largest Qwen Coder model available. But, as mentioned already, that is out of reach for occasional coders like me. Furthermore, I want to continue coding manually to some degree, as otherwise I will start to forget how to write for-loops, which can be awkward... However, do we always need the best model when AI can help generate boilerplate or repetitive tasks even with smaller models?</span><br />
+<span>However, truth be told, I don&#39;t think the setup described in this blog post currently matches the performance of commercial models like Claude Code (Sonnet 4, Opus 4), Gemini 2.5 Pro, the OpenAI models and others. Maybe we could get close if we had the high-end hardware needed to run the largest Qwen Coder model available. But, as mentioned already, that is out of reach for occasional coders like me. Furthermore, I want to continue coding manually to some degree, as otherwise I will start to forget how to write for-loops, which can be awkward... However, do we always need the best model when AI can help generate boilerplate or repetitive tasks even with smaller models?</span><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
diff --git a/index.html b/index.html
index f0731f29..10c7b70b 100644
--- a/index.html
+++ b/index.html
@@ -13,7 +13,7 @@
</p>
<h1 style='display: inline' id='hello'>Hello!</h1><br />
<br />
-<span class='quote'>This site was generated at 2025-08-04T17:48:22+03:00 by <span class='inlinecode'>Gemtexter</span></span><br />
+<span class='quote'>This site was generated at 2025-08-05T09:54:29+03:00 by <span class='inlinecode'>Gemtexter</span></span><br />
<br />
<span>Welcome to the foo.zone!</span><br />
<br />
diff --git a/uptime-stats.html b/uptime-stats.html
index cfe14a58..2e73db0e 100644
--- a/uptime-stats.html
+++ b/uptime-stats.html
@@ -13,7 +13,7 @@
</p>
<h1 style='display: inline' id='my-machine-uptime-stats'>My machine uptime stats</h1><br />
<br />
-<span class='quote'>This site was last updated at 2025-08-04T17:48:22+03:00</span><br />
+<span class='quote'>This site was last updated at 2025-08-05T09:54:29+03:00</span><br />
<br />
<span>The following stats were collected via <span class='inlinecode'>uptimed</span> on all of my personal computers over many years and the output was generated by <span class='inlinecode'>guprecords</span>, the global uptime records stats analyser of mine.</span><br />
<br />
@@ -45,15 +45,15 @@
| 9. | pluto | 51 | Linux 3.2.0-4-amd64 |
| 10. | mega15289 | 50 | Darwin 23.4.0 |
| 11. | *mega-m3-pro | 50 | Darwin 24.5.0 |
-| 12. | *fishfinger | 43 | OpenBSD 7.6 |
-| 13. | *t450 | 43 | FreeBSD 14.2-RELEASE |
+| 12. | *t450 | 43 | FreeBSD 14.2-RELEASE |
+| 13. | *fishfinger | 43 | OpenBSD 7.6 |
| 14. | mega8477 | 40 | Darwin 13.4.0 |
| 15. | phobos | 40 | Linux 3.4.0-CM-g1dd7cdf |
| 16. | *blowfish | 38 | OpenBSD 7.6 |
| 17. | sun | 33 | FreeBSD 10.3-RELEASE-p24 |
| 18. | f2 | 25 | FreeBSD 14.2-RELEASE-p1 |
-| 19. | f1 | 20 | FreeBSD 14.2-RELEASE-p1 |
-| 20. | moon | 20 | FreeBSD 14.0-RELEASE-p3 |
+| 19. | moon | 20 | FreeBSD 14.0-RELEASE-p3 |
+| 20. | f1 | 20 | FreeBSD 14.2-RELEASE-p1 |
+-----+----------------+-------+------------------------------+
</pre>
<br />
@@ -68,7 +68,7 @@
| 1. | vulcan | 4 years, 5 months, 6 days | Linux 3.10.0-1160.81.1.el7.x86_64 |
| 2. | sun | 3 years, 9 months, 26 days | FreeBSD 10.3-RELEASE-p24 |
| 3. | uranus | 3 years, 9 months, 5 days | NetBSD 10.1 |
-| 4. | *earth | 3 years, 7 months, 23 days | Linux 6.15.7-200.fc42.x86_64 |
+| 4. | *earth | 3 years, 7 months, 24 days | Linux 6.15.7-200.fc42.x86_64 |
| 5. | *blowfish | 3 years, 5 months, 16 days | OpenBSD 7.6 |
| 6. | uugrn | 3 years, 5 months, 5 days | FreeBSD 11.2-RELEASE-p4 |
| 7. | deltavega | 3 years, 1 months, 21 days | Linux 3.10.0-1160.11.1.el7.x86_64 |
@@ -163,7 +163,7 @@
| 3. | alphacentauri | 6 years, 9 months, 13 days | FreeBSD 11.4-RELEASE-p7 |
| 4. | vulcan | 4 years, 5 months, 6 days | Linux 3.10.0-1160.81.1.el7.x86_64 |
| 5. | makemake | 4 years, 4 months, 7 days | Linux 6.9.9-200.fc40.x86_64 |
-| 6. | *earth | 4 years, 1 months, 9 days | Linux 6.15.7-200.fc42.x86_64 |
+| 6. | *earth | 4 years, 1 months, 10 days | Linux 6.15.7-200.fc42.x86_64 |
| 7. | sun | 3 years, 10 months, 2 days | FreeBSD 10.3-RELEASE-p24 |
| 8. | *blowfish | 3 years, 5 months, 17 days | OpenBSD 7.6 |
| 9. | uugrn | 3 years, 5 months, 5 days | FreeBSD 11.2-RELEASE-p4 |
@@ -207,8 +207,8 @@
| 16. | Darwin 15... | 15 |
| 17. | Darwin 22... | 12 |
| 18. | Darwin 18... | 11 |
-| 19. | FreeBSD 7... | 10 |
-| 20. | FreeBSD 6... | 10 |
+| 19. | FreeBSD 6... | 10 |
+| 20. | FreeBSD 7... | 10 |
+-----+----------------+-------+
</pre>
<br />
@@ -269,8 +269,8 @@
| 16. | Darwin 18... | 32 |
| 17. | Darwin 22... | 30 |
| 18. | Darwin 15... | 29 |
-| 19. | FreeBSD 5... | 25 |
-| 20. | FreeBSD 13... | 25 |
+| 19. | FreeBSD 13... | 25 |
+| 20. | FreeBSD 5... | 25 |
+-----+----------------+-------+
</pre>
<br />
@@ -298,7 +298,7 @@
+-----+------------+------------------------------+
| Pos | KernelName | Uptime |
+-----+------------+------------------------------+
-| 1. | *Linux | 27 years, 11 months, 11 days |
+| 1. | *Linux | 27 years, 11 months, 12 days |
| 2. | *FreeBSD | 11 years, 5 months, 3 days |
| 3. | *OpenBSD | 7 years, 5 months, 5 days |
| 4. | *Darwin | 4 years, 10 months, 21 days |