author    Paul Buetow <paul@buetow.org>    2025-08-04 17:50:01 +0300
committer Paul Buetow <paul@buetow.org>    2025-08-04 17:50:01 +0300
commit    8051df0f59adf41038e7cb58323fb865132ebd24 (patch)
tree      434aed14a37628514fe0d6f71be32cdbbcdaa903 /gemfeed
parent    da06eb32175608e90d0c0093312e9aa254e325e2 (diff)
Update content for gemtext
Diffstat (limited to 'gemfeed')
-rw-r--r--  gemfeed/2025-06-22-task-samurai.gmi                      |  2
-rw-r--r--  gemfeed/2025-08-05-local-coding-llm-with-ollama.gmi      | 21
-rw-r--r--  gemfeed/2025-08-05-local-coding-llm-with-ollama.gmi.tpl  |  2
-rw-r--r--  gemfeed/atom.xml                                         | 26
4 files changed, 46 insertions(+), 5 deletions(-)
diff --git a/gemfeed/2025-06-22-task-samurai.gmi b/gemfeed/2025-06-22-task-samurai.gmi
index 7782a570..2ab9e3d0 100644
--- a/gemfeed/2025-06-22-task-samurai.gmi
+++ b/gemfeed/2025-06-22-task-samurai.gmi
@@ -126,7 +126,7 @@ E-Mail your comments to `paul@nospam.buetow.org` :-)
Other related posts are:
-=> ./2025-08-05-local-coding-llm-with-ollama.gmi 2025-08-05 Local LLM for Coding with Ollama
+=> ./2025-08-05-local-coding-llm-with-ollama.gmi 2025-08-05 Local LLM for Coding with Ollama on macOS
=> ./2025-06-22-task-samurai.gmi 2025-06-22 Task Samurai: An agentic coding learning experiment (You are currently reading this)
=> ../ Back to the main site
diff --git a/gemfeed/2025-08-05-local-coding-llm-with-ollama.gmi b/gemfeed/2025-08-05-local-coding-llm-with-ollama.gmi
index b838b764..b0a806de 100644
--- a/gemfeed/2025-08-05-local-coding-llm-with-ollama.gmi
+++ b/gemfeed/2025-08-05-local-coding-llm-with-ollama.gmi
@@ -14,6 +14,25 @@
/_| |_\_________________/ quantised \
```
+## Table of Contents
+
+* ⇢ Local LLM for Coding with Ollama on macOS
+* ⇢ ⇢ Why Local LLMs?
+* ⇢ ⇢ Hardware Considerations
+* ⇢ ⇢ Basic Setup and Manual Code Prompting
+* ⇢ ⇢ ⇢ Installing Ollama and a Model
+* ⇢ ⇢ ⇢ Example Usage
+* ⇢ ⇢ Agentic Coding with Aider
+* ⇢ ⇢ ⇢ Installation
+* ⇢ ⇢ ⇢ Agentic coding prompt
+* ⇢ ⇢ ⇢ Compilation & Execution
+* ⇢ ⇢ ⇢ The code
+* ⇢ ⇢ In-Editor Code Completion
+* ⇢ ⇢ ⇢ Installation of `lsp-ai`
+* ⇢ ⇢ ⇢ Helix Configuration
+* ⇢ ⇢ ⇢ Code completion in action
+* ⇢ ⇢ Conclusion
+
With all the AI buzz around coding assistants, and being a bit concerned about being dependent on third-party cloud providers here, I decided to explore the capabilities of local large language models (LLMs) using Ollama.
Ollama is a powerful tool that brings local AI capabilities directly to your local hardware. By running AI models locally, you can enjoy the benefits of intelligent assistance without relying on cloud services. This document outlines my initial setup and experiences with Ollama, with a focus on coding tasks and agentic coding.
@@ -401,7 +420,7 @@ E-Mail your comments to `paul@nospam.buetow.org` :-)
Other related posts are:
-=> ./2025-08-05-local-coding-llm-with-ollama.gmi 2025-08-05 Local LLM for Coding with Ollama (You are currently reading this)
+=> ./2025-08-05-local-coding-llm-with-ollama.gmi 2025-08-05 Local LLM for Coding with Ollama on macOS (You are currently reading this)
=> ./2025-06-22-task-samurai.gmi 2025-06-22 Task Samurai: An agentic coding learning experiment
=> ../ Back to the main site
diff --git a/gemfeed/2025-08-05-local-coding-llm-with-ollama.gmi.tpl b/gemfeed/2025-08-05-local-coding-llm-with-ollama.gmi.tpl
index 9b4815ed..cf6d1972 100644
--- a/gemfeed/2025-08-05-local-coding-llm-with-ollama.gmi.tpl
+++ b/gemfeed/2025-08-05-local-coding-llm-with-ollama.gmi.tpl
@@ -14,6 +14,8 @@
/_| |_\_________________/ quantised \
```
+<< template::inline::toc
+
With all the AI buzz around coding assistants, and being a bit concerned about being dependent on third-party cloud providers here, I decided to explore the capabilities of local large language models (LLMs) using Ollama.
Ollama is a powerful tool that brings local AI capabilities directly to your local hardware. By running AI models locally, you can enjoy the benefits of intelligent assistance without relying on cloud services. This document outlines my initial setup and experiences with Ollama, with a focus on coding tasks and agentic coding.
diff --git a/gemfeed/atom.xml b/gemfeed/atom.xml
index 04327046..0a401857 100644
--- a/gemfeed/atom.xml
+++ b/gemfeed/atom.xml
@@ -1,6 +1,6 @@
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
- <updated>2025-08-04T17:23:03+03:00</updated>
+ <updated>2025-08-04T17:48:22+03:00</updated>
<title>foo.zone feed</title>
<subtitle>To be in the .zone!</subtitle>
<link href="gemini://foo.zone/gemfeed/atom.xml" rel="self" />
@@ -34,6 +34,26 @@
/_| |_\_________________/ quantised \
</pre>
<br />
+<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
+<br />
+<ul>
+<li><a href='#local-llm-for-coding-with-ollama-on-macos'>Local LLM for Coding with Ollama on macOS</a></li>
+<li>⇢ <a href='#why-local-llms'>Why Local LLMs?</a></li>
+<li>⇢ <a href='#hardware-considerations'>Hardware Considerations</a></li>
+<li>⇢ <a href='#basic-setup-and-manual-code-prompting'>Basic Setup and Manual Code Prompting</a></li>
+<li>⇢ ⇢ <a href='#installing-ollama-and-a-model'>Installing Ollama and a Model</a></li>
+<li>⇢ ⇢ <a href='#example-usage'>Example Usage</a></li>
+<li>⇢ <a href='#agentic-coding-with-aider'>Agentic Coding with Aider</a></li>
+<li>⇢ ⇢ <a href='#installation'>Installation</a></li>
+<li>⇢ ⇢ <a href='#agentic-coding-prompt'>Agentic coding prompt</a></li>
+<li>⇢ ⇢ <a href='#compilation--execution'>Compilation &amp; Execution</a></li>
+<li>⇢ ⇢ <a href='#the-code'>The code</a></li>
+<li>⇢ <a href='#in-editor-code-completion'>In-Editor Code Completion</a></li>
+<li>⇢ ⇢ <a href='#installation-of-lsp-ai'>Installation of <span class='inlinecode'>lsp-ai</span></a></li>
+<li>⇢ ⇢ <a href='#helix-configuration'>Helix Configuration</a></li>
+<li>⇢ ⇢ <a href='#code-completion-in-action'>Code completion in action</a></li>
+<li>⇢ <a href='#conclusion'>Conclusion</a></li>
+</ul><br />
<span>With all the AI buzz around coding assistants, and being a bit concerned about being dependent on third-party cloud providers here, I decided to explore the capabilities of local large language models (LLMs) using Ollama. </span><br />
<br />
<span>Ollama is a powerful tool that brings local AI capabilities directly to your local hardware. By running AI models locally, you can enjoy the benefits of intelligent assistance without relying on cloud services. This document outlines my initial setup and experiences with Ollama, with a focus on coding tasks and agentic coding.</span><br />
@@ -452,7 +472,7 @@ content = "{CODE}"
<br />
<span>Other related posts are:</span><br />
<br />
-<a class='textlink' href='./2025-08-05-local-coding-llm-with-ollama.html'>2025-08-05 Local LLM for Coding with Ollama (You are currently reading this)</a><br />
+<a class='textlink' href='./2025-08-05-local-coding-llm-with-ollama.html'>2025-08-05 Local LLM for Coding with Ollama on macOS (You are currently reading this)</a><br />
<a class='textlink' href='./2025-06-22-task-samurai.html'>2025-06-22 Task Samurai: An agentic coding learning experiment</a><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
@@ -3203,7 +3223,7 @@ Jul <font color="#000000">06</font> <font color="#000000">10</font>:<font color=
<br />
<span>Other related posts are:</span><br />
<br />
-<a class='textlink' href='./2025-08-05-local-coding-llm-with-ollama.html'>2025-08-05 Local LLM for Coding with Ollama</a><br />
+<a class='textlink' href='./2025-08-05-local-coding-llm-with-ollama.html'>2025-08-05 Local LLM for Coding with Ollama on macOS</a><br />
<a class='textlink' href='./2025-06-22-task-samurai.html'>2025-06-22 Task Samurai: An agentic coding learning experiment (You are currently reading this)</a><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />