| author | Paul Buetow <paul@buetow.org> | 2025-08-20 09:07:28 +0300 |
|---|---|---|
| committer | Paul Buetow <paul@buetow.org> | 2025-08-20 09:07:28 +0300 |
| commit | 4d2437727fba2166b807686ad5c6427982aa01b9 | |
| tree | be12824e86b48c9ca9acda02f3bdcf8547b7f5a1 | |
| parent | 86b730fa12d93b689af4d01d4925c8053b50e74b | |
chore: bump version to v0.2.0; docs: split config/usage and update in-editor chat (tag: v0.2.0)
| -rw-r--r-- | README.md | 199 |
| -rw-r--r-- | docs/configuration.md | 112 |
| -rw-r--r-- | docs/usage-examples.md | 117 |
| -rw-r--r-- | internal/version.go | 2 |
4 files changed, 245 insertions, 185 deletions
@@ -10,204 +10,35 @@ Hexai exposes a simple LLM provider interface. It supports OpenAI, GitHub Copilo
 
 ## Configuration
 
-### Example configuration file
-
-- Location: `$XDG_CONFIG_HOME/hexai/config.json` (usually `~/.config/hexai/config.json`)
-- Example:
-
-```
-{
-  "max_tokens": 4000,
-  "context_mode": "always-full",
-  "context_window_lines": 120,
-  "max_context_tokens": 4000,
-  "log_preview_limit": 100,
-  "no_disk_io": true,
-  "trigger_characters": [".", ":", "/", "_", " " ],
-  "coding_temperature": 0.2,
-  "provider": "ollama",
-  "copilot_model": "gpt-4o-mini",
-  "copilot_base_url": "https://api.githubcopilot.com",
-  "copilot_temperature": 0.2,
-  "openai_model": "gpt-4.1",
-  "openai_base_url": "https://api.openai.com/v1",
-  "openai_temperature": 0.2,
-  "ollama_model": "qwen3-coder:30b-a3b-q4_K_M",
-  "ollama_base_url": "http://localhost:11434",
-  "ollama_temperature": 0.2
-}
-```
-
-* context_mode: minimal | window | file-on-new-func | always-full
-* provider: openai | copilot | ollama
-* coding_temperature: single knob for LSP requests (optional; default uses provider temperature)
-* openai_model, openai_base_url, openai_temperature: OpenAI-only options
-* copilot_model, copilot_base_url, copilot_temperature: Copilot-only options
-* ollama_model, ollama_base_url, ollama_temperature: Ollama-only options
-
-Ensure `HEXAI_OPENAI_API_KEY` (or `OPENAI_API_KEY`) or `COPILOT_API_KEY` is set in your environment according to your chosen provider.
-
-### Environment overrides
-
-- All config-file options can be overridden by environment variables prefixed with `HEXAI_`.
-- Env values take precedence over `config.json`.
-- Examples:
-  - `HEXAI_PROVIDER`, `HEXAI_MAX_TOKENS`, `HEXAI_CONTEXT_MODE`, `HEXAI_CONTEXT_WINDOW_LINES`, `HEXAI_MAX_CONTEXT_TOKENS`, `HEXAI_LOG_PREVIEW_LIMIT`
-  - `HEXAI_CODING_TEMPERATURE`
-  - `HEXAI_TRIGGER_CHARACTERS` (comma-separated, e.g. `".,:,_ , "`)
-  - `HEXAI_OPENAI_MODEL`, `HEXAI_OPENAI_BASE_URL`, `HEXAI_OPENAI_TEMPERATURE`
-  - `HEXAI_COPILOT_MODEL`, `HEXAI_COPILOT_BASE_URL`, `HEXAI_COPILOT_TEMPERATURE`
-  - `HEXAI_OLLAMA_MODEL`, `HEXAI_OLLAMA_BASE_URL`, `HEXAI_OLLAMA_TEMPERATURE`
-- API keys:
-  - OpenAI: prefer `HEXAI_OPENAI_API_KEY`, falling back to `OPENAI_API_KEY`.
-  - Copilot: prefer `HEXAI_COPILOT_API_KEY`, falling back to `COPILOT_API_KEY`.
-
-### Selecting a provider
-
-- Set `provider` in the config file to `openai`, `copilot`, or `ollama`.
-- If omitted, Hexai defaults to `openai`.
-
-### OpenAI configuration
-
-- Required: `HEXAI_OPENAI_API_KEY` (or `OPENAI_API_KEY`) — provided via environment variable only.
-- In config file:
-  - `openai_model` — model name (default: `gpt-4.1`).
-  - `openai_base_url` — API base (default: `https://api.openai.com/v1`).
-  - `openai_temperature` — default temperature (coding-friendly default `0.2`).
-
-### GitHub Copilot configuration
-
-- Required: `COPILOT_API_KEY` — provided via environment variable only.
-- In config file:
-  - `copilot_model` — model name (default: `gpt-4o-mini`).
-  - `copilot_base_url` — API base (default: `https://api.githubcopilot.com`).
-  - `copilot_temperature` — default temperature (coding-friendly default `0.2`).
-
-### Ollama configuration (local)
-
-- In config file:
-  - `ollama_model` — model name/tag (default: `qwen3-coder:30b-a3b-q4_K_M`).
-  - `ollama_base_url` — base URL to Ollama (default: `http://localhost:11434`).
-  - `ollama_temperature` — default temperature (coding-friendly default `0.2`).
-
-### Temperature behavior
-
-* What it is: Temperature controls how random/creative the model's word choices are.
-  Lower values (≈0–0.3) are more deterministic and precise; higher values (≈0.7+)
-  produce more diverse, creative outputs.
-* Default for coding: When not specified in the config, Hexai uses a
-  coding-friendly default temperature of `0.2` for all providers.
-* Per-provider override: Set `openai_temperature`, `copilot_temperature`, or
-  `ollama_temperature` to override. Valid ranges depend on the provider, but
-  typically `0.0`–`2.0`.
-* LSP vs CLI: The LSP sometimes overrides temperature for specific actions
-  using `coding_temperature` (if set). If `coding_temperature` is not set,
-  LSP calls use the provider default temperature. The CLI uses the configured
-  provider default unless you change it.
-
-Recommended ranges and use cases:
-
-- 0.0–0.3: Deterministic, precise, minimal tangents. Best for code
-  refactoring, bug fixes, tests, and data extraction.
-- 0.4–0.7: Balanced creativity and coherence. General Q&A and most writing.
-- 0.8–1.2+: Highly creative/varied. Brainstorming, fiction, or ad copy; may
-  increase risk of off-target or verbose outputs.
-
-Guidance:
-
-- Lower temperature increases consistency and predictability, but can repeat
-  or be terse.
-- Higher temperature increases diversity of phrasing and ideas, but can wander
-  or introduce mistakes.
-
-Notes:
-
-- For Ollama, ensure the model is available locally (e.g., `ollama pull qwen3-coder:30b-a3b-q4_K_M`).
-- If you run Ollama in OpenAI‑compatible mode, you may alternatively use the
-  OpenAI provider with `openai_base_url` in the config pointing to your local endpoint.
+See the full configuration guide in `docs/configuration.md`.
 
 ## Usage
 
-### Hexai LSP Server
+### Hexai LSP server
 
-- Run LSP server over stdio:
-  - `hexai-lsp`
-
-- LSP flags (minimal):
-  - `-version`: print the Hexai version and exit.
-  - `-log`: path to log file (optional; default `/tmp/hexai-lsp.log`).
+- Run over stdio: `hexai-lsp`
+- Flags: `-version`, `-log`
+
+More in `docs/usage-examples.md`.
 
 ### Configure in Helix
-
-In Helix's `~/.config/helix/languages.toml`, configure, for example, the following:
-
-```toml
-[[language]]
-name = "go"
-auto-format = true
-diagnostic-severity = "hint"
-formatter = { command = "goimports" }
-language-servers = [ "gopls", "golangci-lint-lsp", "hexai" ]
-
-[language-server.hexai]
-command = "hexai-lsp"
-```
-
-Note that we have also configured other LSPs here (for Go: `gopls` and
-`golangci-lint-lsp`, alongside `hexai` for AI completions); they aren't
-required for `hexai` to work, though.
-
-## Inline triggers
+See `docs/usage-examples.md#configure-in-helix` for a sample `languages.toml` snippet.
 
-Hexai LSP supports inline trigger tags you can type in your code to request an
-action from the LLM and then clean up the tag automatically.
+## In-editor chat and inline features
 
-- `;some prompt text here;`: Do what is written in `some prompt text here`, then remove just the prompt.
-  - Strict form: no space after the first `;`.
-  - An optional single space immediately after the closing `;` is also removed.
-- Spaced variants such as `; text ;` or `; spaced ;` are ignored.
+- In-editor chat: ask inline by ending a line with `..`, `??`, `!!`, `::`, or `;;`. Hexai inserts
+  a `>`-prefixed answer below. See `docs/usage-examples.md#in-editor-chat`.
+- Inline triggers: strict `;text;` instructions for selection-based actions. See
+  `docs/usage-examples.md#inline-triggers`.
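+
+For example, a minimal sketch of the chat flow described above (the question and answer here are
+illustrative; the full walkthrough lives in `docs/usage-examples.md#in-editor-chat`):
+
+```text
+How do I read a whole file in Go??
+
+> Use os.ReadFile, which returns the file's contents as a []byte.
+```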
 
 ## Code actions
 
-Hexai provides code actions that operate only on the current selection in Helix:
-
-- Rewrite selection: Hexai looks for the first instruction inside the selection
-  and rewrites the selection accordingly.
-- Resolve diagnostics: With a selection active, Hexai gathers only diagnostics
-  that overlap your selection and fixes them by editing only the selected code.
-  Diagnostics outside the selection are not modified.
-
-Instruction sources (first one found wins):
-
-- Strict marker: `;text;` (no space after the first `;`).
-- Line comments: `// text`, `# text`, `-- text`.
-- Single-line block comments: `/* text */`, `<!-- text -->`.
+Overview and details in `docs/usage-examples.md#code-actions`.
 
 ## Hexai CLI tool
 
-- Run command-line tool (processes text via the configured LLM):
-  - `cat SOMEFILE.txt | hexai`
-  - `hexai 'some prompt text here'`
-  - `cat SOMEFILE.txt | hexai 'some prompt text here'` (stdin and arg are concatenated)
-
-- Default style: concise answers.
-  - If the prompt asks for commands, outputs only the commands with no commentary.
-  - Add the word `explain` in your prompt to request a verbose explanation.
-- Exit codes: `0` success, `1` provider/config error, `2` no input.
-
-Examples:
-
-```
-# From stdin only
-cat SOMEFILE.txt | hexai
-
-# From arg only
-hexai 'summarize: list 3 bullets'
-
-# From both (stdin first, then arg)
-cat SOMEFILE.txt | hexai 'explain the tradeoffs'
-
-# Commands-only output (no explanation)
-hexai 'install ripgrep on macOS'
+See `docs/usage-examples.md#cli-usage` and `docs/usage-examples.md#examples` for examples.
-
-# Verbose explanation
-hexai 'install ripgrep on macOS and explain'
-```
+
+<!-- In-editor chat example moved to docs/usage-examples.md#in-editor-chat -->
diff --git a/docs/configuration.md b/docs/configuration.md
new file mode 100644
index 0000000..6e430d1
--- /dev/null
+++ b/docs/configuration.md
@@ -0,0 +1,112 @@
+# Hexai configuration
+
+This document covers all configuration options for Hexai, including the config file,
+environment overrides, provider selection, and temperature behavior.
+
+## Config file
+
+- Location: `$XDG_CONFIG_HOME/hexai/config.json` (usually `~/.config/hexai/config.json`).
+- Example:
+
+```json
+{
+  "max_tokens": 4000,
+  "context_mode": "always-full",
+  "context_window_lines": 120,
+  "max_context_tokens": 4000,
+  "log_preview_limit": 100,
+  "no_disk_io": true,
+  "trigger_characters": [".", ":", "/", "_", " " ],
+  "coding_temperature": 0.2,
+  "provider": "ollama",
+  "copilot_model": "gpt-4o-mini",
+  "copilot_base_url": "https://api.githubcopilot.com",
+  "copilot_temperature": 0.2,
+  "openai_model": "gpt-4.1",
+  "openai_base_url": "https://api.openai.com/v1",
+  "openai_temperature": 0.2,
+  "ollama_model": "qwen3-coder:30b-a3b-q4_K_M",
+  "ollama_base_url": "http://localhost:11434",
+  "ollama_temperature": 0.2
+}
+```
+
+Key fields:
+
+- max_tokens: upper bound for a single LLM response.
+- context_mode: `minimal` | `window` | `file-on-new-func` | `always-full`.
+- context_window_lines: line count for `window` mode.
+- max_context_tokens: hard cap for sent context tokens.
+- log_preview_limit: max characters of context preview logged.
+- no_disk_io: avoid reading files from disk when building context.
+- trigger_characters: LSP completion trigger characters.
+- coding_temperature: optional override for LSP calls.
+- provider: `openai` | `copilot` | `ollama`.
+
+## Environment overrides
+
+- All config-file options can be overridden by environment variables prefixed with `HEXAI_`.
+- Env values take precedence over `config.json`.
+- Examples:
+  - `HEXAI_PROVIDER`, `HEXAI_MAX_TOKENS`, `HEXAI_CONTEXT_MODE`, `HEXAI_CONTEXT_WINDOW_LINES`, `HEXAI_MAX_CONTEXT_TOKENS`, `HEXAI_LOG_PREVIEW_LIMIT`
+  - `HEXAI_CODING_TEMPERATURE`
+  - `HEXAI_TRIGGER_CHARACTERS` (comma-separated, e.g., `".,:,_ , "`)
+  - `HEXAI_OPENAI_MODEL`, `HEXAI_OPENAI_BASE_URL`, `HEXAI_OPENAI_TEMPERATURE`
+  - `HEXAI_COPILOT_MODEL`, `HEXAI_COPILOT_BASE_URL`, `HEXAI_COPILOT_TEMPERATURE`
+  - `HEXAI_OLLAMA_MODEL`, `HEXAI_OLLAMA_BASE_URL`, `HEXAI_OLLAMA_TEMPERATURE`
+
+API keys:
+
+- OpenAI: prefer `HEXAI_OPENAI_API_KEY`, falling back to `OPENAI_API_KEY`.
+- Copilot: prefer `HEXAI_COPILOT_API_KEY`, falling back to `COPILOT_API_KEY`.
+
+## Selecting a provider
+
+- Set `provider` in the config to `openai`, `copilot`, or `ollama`.
+- If omitted, Hexai defaults to `openai`.
+
+## OpenAI configuration
+
+- Required: `HEXAI_OPENAI_API_KEY` (or `OPENAI_API_KEY`).
+- Options:
+  - `openai_model` — model name (default: `gpt-4.1`).
+  - `openai_base_url` — API base (default: `https://api.openai.com/v1`).
+  - `openai_temperature` — default temperature (coding-friendly `0.2`).
+
+## GitHub Copilot configuration
+
+- Required: `COPILOT_API_KEY`.
+- Options:
+  - `copilot_model` — model name (default: `gpt-4o-mini`).
+  - `copilot_base_url` — API base (default: `https://api.githubcopilot.com`).
+  - `copilot_temperature` — default temperature (coding-friendly `0.2`).
+
+## Ollama configuration
+
+- Options:
+  - `ollama_model` — model name/tag (default: `qwen3-coder:30b-a3b-q4_K_M`).
+  - `ollama_base_url` — base URL (default: `http://localhost:11434`).
+  - `ollama_temperature` — default temperature (coding-friendly `0.2`).
+
+Notes:
+
+- Ensure the model is available locally (e.g., `ollama pull qwen3-coder:30b-a3b-q4_K_M`).
+- Alternatively, run Ollama in OpenAI‑compatible mode and use the OpenAI provider with
+  `openai_base_url` pointed at your local endpoint.
+
+## Temperature behavior
+
+- What it is: controls randomness/creativity of outputs.
+- Default for coding: `0.2` for all providers unless overridden.
+- Per-provider overrides: `openai_temperature`, `copilot_temperature`, `ollama_temperature`.
+
+Recommended ranges:
+
+- 0.0–0.3: deterministic and precise; best for refactors, tests, and bug fixes.
+- 0.4–0.7: balanced; general Q&A and writing.
+- 0.8–1.2+: creative; brainstorming; may increase tangents.
+
+Guidance:
+
+- Lower temperature increases consistency, but can be terse or repetitive.
+- Higher temperature increases diversity, but can wander or introduce mistakes.
diff --git a/docs/usage-examples.md b/docs/usage-examples.md
new file mode 100644
index 0000000..e4f6d09
--- /dev/null
+++ b/docs/usage-examples.md
@@ -0,0 +1,117 @@
+# Hexai usage and examples
+
+This document describes how to run the LSP server, configure Helix, use in-editor chat,
+inline triggers, code actions, and the CLI — with examples.
+
+## Table of contents
+
+- LSP server
+- Configure in Helix
+- In-editor chat
+- Inline triggers
+- Code actions
+- CLI usage
+- Examples
+
+## LSP server
+
+- Run over stdio: `hexai-lsp`
+- Flags:
+  - `-version`: print Hexai version and exit.
+  - `-log`: path to log file (default `/tmp/hexai-lsp.log`).
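+
+For example, a quick invocation sketch using only the flags listed above (the custom log path is
+hypothetical):
+
+```sh
+# Write the log somewhere other than the default /tmp/hexai-lsp.log
+hexai-lsp -log ~/hexai-lsp.log
+
+# Print the Hexai version and exit
+hexai-lsp -version
+```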
+
+## Configure in Helix
+
+In `~/.config/helix/languages.toml`:
+
+```toml
+[[language]]
+name = "go"
+auto-format = true
+diagnostic-severity = "hint"
+formatter = { command = "goimports" }
+language-servers = [ "gopls", "golangci-lint-lsp", "hexai" ]
+
+[language-server.hexai]
+command = "hexai-lsp"
+```
+
+Note: additional LSPs (`gopls`, `golangci-lint-lsp`) are optional; Hexai works without them.
+
+## In-editor chat
+
+Ask a question at the end of a line and receive the answer inline.
+
+- End your question line with a trigger: `..`, `??`, `!!`, `::`, or `;;`.
+- Hexai removes the trailing marker (last char for `..`/`??`/`!!`/`::`, both for `;;`).
+- It inserts a blank line, then a reply line prefixed with `> `, then one extra newline so most
+  editors place the cursor on a fresh blank line after the answer.
+- If a `>` reply already exists below the question, Hexai won’t answer again.
+
+Example:
+
+```text
+What is a slice in Go??
+
+> A slice is a dynamically-sized, flexible view into the elements of an array. It references
+> an underlying array and tracks length/capacity; most Go code uses slices instead of arrays.
+
+```
+
+Context: Hexai includes up to the three most recent Q/A pairs above the question when asking the
+LLM, so follow-ups remain on topic (e.g., “Are there many tourists?” after a location answer).
+
+## Inline triggers
+
+Hexai supports inline prompt tags you can type in code to request an action from the LLM and then
+auto-clean the tag. The strict semicolon form is supported:
+
+- `;do something;` — Hexai uses the text between semicolons as the instruction and removes only the
+  prompt. Strict form requires no space after the first `;` and no space before the closing `;`.
+
+Spaced variants (e.g., `; spaced ;`) are ignored.
+
+## Code actions
+
+Operate on the current selection in Helix:
+
+- Rewrite selection: finds the first instruction inside the selection and rewrites accordingly.
+- Resolve diagnostics: gathers only diagnostics overlapping the selection and fixes them by editing
+  the selected code; diagnostics outside the selection are not changed.
+
+Instruction sources (first match wins):
+
+- Strict marker: `;text;` (no space after the first `;`).
+- Line comments: `// text`, `# text`, `-- text`.
+- Single-line block comments: `/* text */`, `<!-- text -->`.
+
+## CLI usage
+
+Process text via the configured LLM:
+
+- `cat SOMEFILE.txt | hexai`
+- `hexai 'some prompt text here'`
+- `cat SOMEFILE.txt | hexai 'some prompt text here'` (stdin and arg are concatenated)
+
+Defaults: concise answers. If the prompt asks for commands, Hexai outputs only commands. Add the
+word `explain` to request a verbose explanation. Exit codes: `0` success, `1` provider/config error,
+`2` no input.
+
+## Examples
+
+```sh
+# From stdin only
+cat SOMEFILE.txt | hexai
+
+# From arg only
+hexai 'summarize: list 3 bullets'
+
+# From both (stdin first, then arg)
+cat SOMEFILE.txt | hexai 'explain the tradeoffs'
+
+# Commands-only output (no explanation)
+hexai 'install ripgrep on macOS'
+
+# Verbose explanation
+hexai 'install ripgrep on macOS and explain'
+```
diff --git a/internal/version.go b/internal/version.go
index d702f0d..876cbc5 100644
--- a/internal/version.go
+++ b/internal/version.go
@@ -1,4 +1,4 @@
 // Summary: Hexai semantic version identifier used by CLI and LSP binaries.
 package internal
 
-const Version = "0.1.1"
+const Version = "0.2.0"
