Diffstat (limited to 'gemfeed/2025-08-05-local-coding-llm-with-ollama.md')

 gemfeed/2025-08-05-local-coding-llm-with-ollama.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/gemfeed/2025-08-05-local-coding-llm-with-ollama.md b/gemfeed/2025-08-05-local-coding-llm-with-ollama.md
index bb8f11a5..f2c84834 100644
--- a/gemfeed/2025-08-05-local-coding-llm-with-ollama.md
+++ b/gemfeed/2025-08-05-local-coding-llm-with-ollama.md
@@ -407,7 +407,7 @@ In the LSP auto-completion, the one prefixed with `ai - ` was generated by `qwen
 
 I found GitHub Copilot to be still faster than `qwen2.5-coder:14b`, but the local LLM one is actually workable for me already. And, as mentioned earlier, things will likely improve in the future regarding local LLMs. So I am excited about the future of local LLMs and coding tools like Ollama and Helix.
 
-> After trying `qwen3-coder:30b-a3b-q4_K_M` (following the publication of this blog post), I found it to be significantly faster and more capable than the previous model, making it a promising option for local coding tasks. Experimentation reveals that even current local setups are surprisingly effective for routine coding tasks, offering a glimpse into the future of on-machine AI assistance.
+> After trying `qwen3-coder:30b-a3b-q4_K_M` (following the publication of this blog post), I found it to be significantly faster and more capable than the previous model, making it a promising option for local coding tasks. Honestly, even my current local setup already handles routine coding stuff pretty well—better than I expected.
 
 ## Conclusion
