path: root/hyperstack-vm2.toml
2026-03-24  gpt-oss-120b: enable reasoning via openai_gptoss parser  (Paul Buetow)

- Add --reasoning-parser openai_gptoss to the gpt-oss-120b vLLM config in all three toml files; extracts <|channel|>analysis thinking blocks into reasoning_content in API responses
- Mark gpt-oss-120b as reasoning: true in pi/agent/models.json for all three providers (hyperstack, hyperstack1, hyperstack2)
- Update vm1 state file

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
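The flag change above can be sketched as a config fragment. Only the added flag and the `openai_gptoss` parser name come from the commit; the table name, model key, and `extra_args` layout are assumptions for illustration:

```toml
# Hypothetical excerpt from hyperstack-vm2.toml; the [vllm.gpt-oss-120b]
# table and surrounding keys are illustrative, not from the commit.
[vllm.gpt-oss-120b]
model = "openai/gpt-oss-120b"
extra_args = [
  # Added by this commit: extract <|channel|>analysis thinking blocks
  # into reasoning_content in API responses instead of leaving them
  # inline in the answer text.
  "--reasoning-parser", "openai_gptoss",
]
```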
2026-03-21  Fix Nemotron OOM; add VM lifecycle fish abbrs; document automated setup  (Paul Buetow)

- hyperstack-vm1/vm2.toml: reduce nemotron-super max_model_len 262144→131072 and add --enforce-eager to disable CUDA graph capture (~3-4 GB overhead). The Nemotron 120B weights (~60 GB) leave too little VRAM headroom for KV cache allocation and CUDA graph buffers at 262K context on a single A100 80GB; 131K context with eager mode is stable. README VRAM table updated to match.
- hyperstack.fish: add hyperstack-create/delete/test and hyperstack-create/delete-both abbreviations for VM lifecycle management alongside the existing pi-* aliases.
- README.md: add an "Automated setup reference" section with single-VM and two-VM command flows before the manual vLLM Docker setup section.

End-to-end tested: single VM (GPT-OSS 120B), dual VM (Nemotron + Qwen3-Coder), pi queries on all three models — all passed.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
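The OOM fix amounts to one key change and one extra vLLM flag. A minimal sketch of the changed fragment; the table name, key names, and layout are assumptions, since the commit only states the values:

```toml
# Hypothetical excerpt from hyperstack-vm1.toml / hyperstack-vm2.toml;
# table and key names are illustrative, not from the commit.
[vllm.nemotron-super]
# Reduced from 262144: a 131K context leaves enough VRAM headroom for
# the KV cache next to the ~60 GB of weights on a single A100 80GB.
max_model_len = 131072
extra_args = [
  # Disable CUDA graph capture (~3-4 GB of VRAM overhead) in favor of
  # eager-mode execution.
  "--enforce-eager",
]
```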
2026-03-21  Remove LiteLLM and Claude Code repo references (task 301)  (Paul Buetow)
2026-03-21  initial import  (Paul Buetow)