Diffstat (limited to 'gemfeed')
| -rw-r--r-- | gemfeed/2025-06-22-task-samurai.html                 |  2 |
| -rw-r--r-- | gemfeed/2025-08-05-local-coding-llm-with-ollama.html | 22 |
| -rw-r--r-- | gemfeed/atom.xml                                     | 26 |
3 files changed, 45 insertions, 5 deletions
diff --git a/gemfeed/2025-06-22-task-samurai.html b/gemfeed/2025-06-22-task-samurai.html
index c6a9f151..b3579c2a 100644
--- a/gemfeed/2025-06-22-task-samurai.html
+++ b/gemfeed/2025-06-22-task-samurai.html
@@ -145,7 +145,7 @@
 <br />
 <span>Other related posts are:</span><br />
 <br />
-<a class='textlink' href='./2025-08-05-local-coding-llm-with-ollama.html'>2025-08-05 Local LLM for Coding with Ollama</a><br />
+<a class='textlink' href='./2025-08-05-local-coding-llm-with-ollama.html'>2025-08-05 Local LLM for Coding with Ollama on macOS</a><br />
 <a class='textlink' href='./2025-06-22-task-samurai.html'>2025-06-22 Task Samurai: An agentic coding learning experiment (You are currently reading this)</a><br />
 <br />
 <a class='textlink' href='../'>Back to the main site</a><br />
diff --git a/gemfeed/2025-08-05-local-coding-llm-with-ollama.html b/gemfeed/2025-08-05-local-coding-llm-with-ollama.html
index b47dda81..1bccab43 100644
--- a/gemfeed/2025-08-05-local-coding-llm-with-ollama.html
+++ b/gemfeed/2025-08-05-local-coding-llm-with-ollama.html
@@ -27,6 +27,26 @@
 <br />
 /_| |_\_________________/ quantised \
 </pre>
 <br />
+<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
+<br />
+<ul>
+<li><a href='#local-llm-for-coding-with-ollama-on-macos'>Local LLM for Coding with Ollama on macOS</a></li>
+<li>⇢ <a href='#why-local-llms'>Why Local LLMs?</a></li>
+<li>⇢ <a href='#hardware-considerations'>Hardware Considerations</a></li>
+<li>⇢ <a href='#basic-setup-and-manual-code-prompting'>Basic Setup and Manual Code Prompting</a></li>
+<li>⇢ ⇢ <a href='#installing-ollama-and-a-model'>Installing Ollama and a Model</a></li>
+<li>⇢ ⇢ <a href='#example-usage'>Example Usage</a></li>
+<li>⇢ <a href='#agentic-coding-with-aider'>Agentic Coding with Aider</a></li>
+<li>⇢ ⇢ <a href='#installation'>Installation</a></li>
+<li>⇢ ⇢ <a href='#agentic-coding-prompt'>Agentic coding prompt</a></li>
+<li>⇢ ⇢ <a href='#compilation--execution'>Compilation & Execution</a></li>
+<li>⇢ ⇢ <a href='#the-code'>The code</a></li>
+<li>⇢ <a href='#in-editor-code-completion'>In-Editor Code Completion</a></li>
+<li>⇢ ⇢ <a href='#installation-of-lsp-ai'>Installation of <span class='inlinecode'>lsp-ai</span></a></li>
+<li>⇢ ⇢ <a href='#helix-configuration'>Helix Configuration</a></li>
+<li>⇢ ⇢ <a href='#code-completion-in-action'>Code completion in action</a></li>
+<li>⇢ <a href='#conclusion'>Conclusion</a></li>
+</ul><br />
 <span>With all the AI buzz around coding assistants, and being a bit concerned about being dependent on third-party cloud providers here, I decided to explore the capabilities of local large language models (LLMs) using Ollama. </span><br />
 <br />
 <span>Ollama is a powerful tool that brings local AI capabilities directly to your local hardware. By running AI models locally, you can enjoy the benefits of intelligent assistance without relying on cloud services. This document outlines my initial setup and experiences with Ollama, with a focus on coding tasks and agentic coding.</span><br />
@@ -445,7 +465,7 @@ content = "{CODE}"
 <br />
 <br />
 <span>Other related posts are:</span><br />
 <br />
-<a class='textlink' href='./2025-08-05-local-coding-llm-with-ollama.html'>2025-08-05 Local LLM for Coding with Ollama (You are currently reading this)</a><br />
+<a class='textlink' href='./2025-08-05-local-coding-llm-with-ollama.html'>2025-08-05 Local LLM for Coding with Ollama on macOS (You are currently reading this)</a><br />
 <a class='textlink' href='./2025-06-22-task-samurai.html'>2025-06-22 Task Samurai: An agentic coding learning experiment</a><br />
 <br />
 <a class='textlink' href='../'>Back to the main site</a><br />
diff --git a/gemfeed/atom.xml b/gemfeed/atom.xml
index efebdeca..8fc709ae 100644
--- a/gemfeed/atom.xml
+++ b/gemfeed/atom.xml
@@ -1,6 +1,6 @@
 <?xml version="1.0" encoding="utf-8"?>
 <feed xmlns="http://www.w3.org/2005/Atom">
- <updated>2025-08-04T17:23:03+03:00</updated>
+ <updated>2025-08-04T17:48:22+03:00</updated>
 <title>foo.zone feed</title>
 <subtitle>To be in the .zone!</subtitle>
 <link href="https://foo.zone/gemfeed/atom.xml" rel="self" />
@@ -34,6 +34,26 @@
 /_| |_\_________________/ quantised \
 </pre>
 <br />
+<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
+<br />
+<ul>
+<li><a href='#local-llm-for-coding-with-ollama-on-macos'>Local LLM for Coding with Ollama on macOS</a></li>
+<li>⇢ <a href='#why-local-llms'>Why Local LLMs?</a></li>
+<li>⇢ <a href='#hardware-considerations'>Hardware Considerations</a></li>
+<li>⇢ <a href='#basic-setup-and-manual-code-prompting'>Basic Setup and Manual Code Prompting</a></li>
+<li>⇢ ⇢ <a href='#installing-ollama-and-a-model'>Installing Ollama and a Model</a></li>
+<li>⇢ ⇢ <a href='#example-usage'>Example Usage</a></li>
+<li>⇢ <a href='#agentic-coding-with-aider'>Agentic Coding with Aider</a></li>
+<li>⇢ ⇢ <a href='#installation'>Installation</a></li>
+<li>⇢ ⇢ <a href='#agentic-coding-prompt'>Agentic coding prompt</a></li>
+<li>⇢ ⇢ <a href='#compilation--execution'>Compilation & Execution</a></li>
+<li>⇢ ⇢ <a href='#the-code'>The code</a></li>
+<li>⇢ <a href='#in-editor-code-completion'>In-Editor Code Completion</a></li>
+<li>⇢ ⇢ <a href='#installation-of-lsp-ai'>Installation of <span class='inlinecode'>lsp-ai</span></a></li>
+<li>⇢ ⇢ <a href='#helix-configuration'>Helix Configuration</a></li>
+<li>⇢ ⇢ <a href='#code-completion-in-action'>Code completion in action</a></li>
+<li>⇢ <a href='#conclusion'>Conclusion</a></li>
+</ul><br />
 <span>With all the AI buzz around coding assistants, and being a bit concerned about being dependent on third-party cloud providers here, I decided to explore the capabilities of local large language models (LLMs) using Ollama. </span><br />
 <br />
 <span>Ollama is a powerful tool that brings local AI capabilities directly to your local hardware. By running AI models locally, you can enjoy the benefits of intelligent assistance without relying on cloud services. This document outlines my initial setup and experiences with Ollama, with a focus on coding tasks and agentic coding.</span><br />
@@ -452,7 +472,7 @@ content = "{CODE}"
 <br />
 <br />
 <span>Other related posts are:</span><br />
 <br />
-<a class='textlink' href='./2025-08-05-local-coding-llm-with-ollama.html'>2025-08-05 Local LLM for Coding with Ollama (You are currently reading this)</a><br />
+<a class='textlink' href='./2025-08-05-local-coding-llm-with-ollama.html'>2025-08-05 Local LLM for Coding with Ollama on macOS (You are currently reading this)</a><br />
 <a class='textlink' href='./2025-06-22-task-samurai.html'>2025-06-22 Task Samurai: An agentic coding learning experiment</a><br />
 <br />
 <a class='textlink' href='../'>Back to the main site</a><br />
@@ -3203,7 +3223,7 @@ Jul <font color="#000000">06</font> <font color="#000000">10</font>:<font color=
 <br />
 <span>Other related posts are:</span><br />
 <br />
-<a class='textlink' href='./2025-08-05-local-coding-llm-with-ollama.html'>2025-08-05 Local LLM for Coding with Ollama</a><br />
+<a class='textlink' href='./2025-08-05-local-coding-llm-with-ollama.html'>2025-08-05 Local LLM for Coding with Ollama on macOS</a><br />
 <a class='textlink' href='./2025-06-22-task-samurai.html'>2025-06-22 Task Samurai: An agentic coding learning experiment (You are currently reading this)</a><br />
 <br />
 <a class='textlink' href='../'>Back to the main site</a><br />
