author     Paul Buetow <paul@buetow.org>  2025-08-16 23:16:54 +0300
committer  Paul Buetow <paul@buetow.org>  2025-08-16 23:16:54 +0300
commit     765eda955eb811d08d867ff4d3914fc6d60c22dd (patch)
tree       fdc87da6af9d86dbda2ea9ab08244e93fd167188 /README.md
parent     1b01e35c34b953cbf51298f4650dc3215c382a4f (diff)
refactor(config): drop env-based config (except OPENAI_API_KEY)
- Switch to config-file-only; only OPENAI_API_KEY read from env.
- llm: replace env autodetect with Config + NewFromConfig; add newOpenAI/newOllama.
- lsp: NewServer now accepts injected llm.Client.
- cli: remove env overrides; extend appConfig with provider-specific fields; build client from config + OPENAI_API_KEY.
- docs: update README (config-only, defaults to OpenAI, minimal example); simplify flags table.
- add config.json.example.
- prompts: enforce ;text; (no spaces) and add ;;text; to remove entire line; tests added.
Diffstat (limited to 'README.md')
-rw-r--r--  README.md  |  63
1 file changed, 34 insertions(+), 29 deletions(-)
diff --git a/README.md b/README.md
index 5b6dc9c..40102e4 100644
--- a/README.md
+++ b/README.md
@@ -4,38 +4,36 @@
Hexai, the AI LSP for the Helix editor.
-At the moment this project is only in the proof of concept phase.
+At the moment this project is only in the PoC (proof of concept) phase.
## LLM provider
Hexai exposes a simple LLM provider interface. It supports OpenAI and a local
-Ollama server. Provider selection and models are configured via environment
-variables.
+Ollama server. Provider selection and models are configured via a JSON
+configuration file.
### Selecting a provider
-- Set `HEXAI_LLM_PROVIDER` to `openai` or `ollama` to force a provider.
-- If not set, Hexai auto‑detects:
- - Uses OpenAI when `OPENAI_API_KEY` is present.
- - Uses Ollama when any `OLLAMA_*` variables are present.
- - Otherwise, Hexai falls back to a basic, local completion.
+- Set `provider` in the config file to `openai` or `ollama`.
+- If omitted, Hexai defaults to `openai`.
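For illustration, switching providers needs only the `provider` key; a minimal
sketch that forces Ollama:

```
{
  "provider": "ollama"
}
```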
### OpenAI configuration
-- Required: `OPENAI_API_KEY` — your OpenAI API key.
-- Optional: `OPENAI_MODEL` — model name (default: `gpt-4o-mini`).
-- Optional: `OPENAI_BASE_URL` — override the API base (e.g., a compatible endpoint).
+- Required: `OPENAI_API_KEY` — provided via environment variable only.
+- In config file:
+ - `openai_model` — model name (default: `gpt-4o-mini`).
+ - `openai_base_url` — API base (default: `https://api.openai.com/v1`).
### Ollama configuration (local)
-- Optional: `OLLAMA_MODEL` — model name/tag (default: `qwen2.5-coder:latest`).
-- Optional: `OLLAMA_BASE_URL` or `OLLAMA_HOST` — base URL to Ollama
- (default: `http://localhost:11434`).
+- In config file:
+ - `ollama_model` — model name/tag (default: `qwen2.5-coder:latest`).
+ - `ollama_base_url` — base URL to Ollama (default: `http://localhost:11434`).
Notes:
- For Ollama, ensure the model is available locally (e.g., `ollama pull qwen2.5-coder:latest`).
- If you run Ollama in OpenAI‑compatible mode, you may alternatively use the
- OpenAI provider with `OPENAI_BASE_URL` pointing to your local endpoint.
+ OpenAI provider with `openai_base_url` in the config pointing to your local endpoint.
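As a sketch of the second note, the OpenAI provider can be pointed at a local
Ollama instance. This assumes Ollama exposes its OpenAI-compatible API under
`/v1`, its usual path:

```
{
  "provider": "openai",
  // assumption: Ollama's OpenAI-compatible endpoint lives at /v1
  "openai_base_url": "http://localhost:11434/v1",
  "openai_model": "qwen2.5-coder:latest"
}
```

In this setup `OPENAI_API_KEY` may still need to be set to a dummy value, since
the OpenAI provider reads it from the environment.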
## CLI usage and configuration
@@ -52,12 +50,13 @@ Notes:
### Flags quick reference
-| Flag | Env override | Description |
-|-------------------------|----------------------------|----------------------------------------------------|
-| `-log` | — | Path to log file (optional). |
-| `-version` | — | Print version and exit. |
+| Flag | Description |
+|------------|--------------------------------------|
+| `-log` | Path to log file (optional). |
+| `-version` | Print version and exit. |
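Assuming the binary is installed as `hexai` (the name is not shown in this
excerpt), typical invocations would look like:

```
# print version and exit
hexai -version

# run the LSP server, writing logs to a file
hexai -log /tmp/hexai.log
```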
-Configuration is via JSON file and environment variables (env has precedence).
+Configuration is via a JSON file only. Environment variables are not used
+except for `OPENAI_API_KEY`.
### JSON config file
@@ -72,18 +71,24 @@ Configuration is via JSON file and environment variables (env has precedence).
"max_context_tokens": 4000,
"log_preview_limit": 100,
"no_disk_io": true,
- "provider": "ollama" // or "openai"
+ "provider": "ollama", // or "openai"
+ // OpenAI-only options
+ "openai_model": "gpt-4.1",
+ "openai_base_url": "https://api.openai.com/v1",
+ // Ollama-only options
+ "ollama_model": "qwen2.5-coder:latest",
+ "ollama_base_url": "http://localhost:11434"
}
```
-### Environment overrides (take precedence)
+Minimal config (defaults to OpenAI):
-- `HEXAI_MAX_TOKENS`, `HEXAI_CONTEXT_MODE`, `HEXAI_CONTEXT_WINDOW_LINES`, `HEXAI_MAX_CONTEXT_TOKENS`
-- `HEXAI_LOG_PREVIEW_LIMIT`, `HEXAI_NO_DISK_IO`
-- `HEXAI_LLM_PROVIDER` (forces provider)
+```
+{}
+```
+
+Ensure `OPENAI_API_KEY` is set in your environment.
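For example (sketch; replace `sk-...` with your real key):

```
export OPENAI_API_KEY="sk-..."  # the only value Hexai reads from the environment
```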
-### Environment quick reference (providers)
+### Environment
-- `HEXAI_LLM_PROVIDER`: `openai` | `ollama` (optional; otherwise auto‑detect).
-- OpenAI: `OPENAI_API_KEY` (required), `OPENAI_MODEL`, `OPENAI_BASE_URL`.
-- Ollama: `OLLAMA_MODEL`, `OLLAMA_BASE_URL` or `OLLAMA_HOST`.
+- Only `OPENAI_API_KEY` is read from the environment when `provider` is `openai`.