# 2026-02-28 Session Notes
## OpenCode LLM Provider Implementation Plan
Nicholai reviewed and approved a comprehensive plan to add OpenCode as a third extraction LLM provider for the Signet memory pipeline. Previously, only Ollama (HTTP API) and Claude Code (CLI subprocess) were available.
### Design Overview
OpenCode v1.2.15 (installed at `/home/nicholai/.opencode/bin/opencode`) can run as a headless HTTP server and access any model it supports (Anthropic, Google, OpenAI, OpenRouter, local Ollama) through a unified REST API. The implementation will auto-start the OpenCode server on port 4096 when the daemon initializes, with automatic fallback to Ollama if unavailable.
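The startup behavior described above can be sketched as follows. This is a hypothetical illustration, not the actual daemon code: the function names (`isOpenCodeUp`, `pickProvider`) are invented for the example; only the port (4096) and the Ollama fallback come from the plan.

```typescript
// Hypothetical sketch of daemon-init provider selection with fallback.
// Assumes the OpenCode server answers plain HTTP on 127.0.0.1:4096.

type Provider = "opencode" | "ollama";

// Probe the OpenCode server; any network error or timeout means "not up".
async function isOpenCodeUp(port = 4096): Promise<boolean> {
  try {
    const res = await fetch(`http://127.0.0.1:${port}/`, {
      signal: AbortSignal.timeout(1000),
    });
    return res.ok;
  } catch {
    return false;
  }
}

// Pure selection logic, kept separate from I/O so it is trivially testable:
// fall back to Ollama only when OpenCode was requested but is unreachable.
function pickProvider(configured: Provider, openCodeUp: boolean): Provider {
  if (configured === "opencode" && !openCodeUp) return "ollama";
  return configured;
}
```

Separating the health probe from the selection logic keeps the fallback rule unit-testable without a running server.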
### Implementation Scope
Four files require modification:

1. `packages/core/src/types.ts` — add `"opencode"` to the provider union
2. `packages/daemon/src/pipeline/provider.ts` — create a `createOpenCodeProvider()` factory
3. `packages/daemon/src/daemon.ts` — wire provider selection and fallback logic
4. `packages/daemon/src/memory-config.ts` — recognize `opencode` in config resolution
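The `types.ts` change from step 1 might look like the sketch below. The member names for the two existing providers are assumptions based on the notes above; only the addition of `"opencode"` is specified by the plan, and the guard function is illustrative.

```typescript
// Hypothetical sketch of the provider union after step 1, plus a narrowing
// guard of the kind memory-config.ts (step 4) would use when resolving a
// raw config string.

export type ExtractionProvider = "ollama" | "claude-code" | "opencode";

const EXTRACTION_PROVIDERS: readonly ExtractionProvider[] = [
  "ollama",
  "claude-code",
  "opencode",
];

// Type guard: narrows an arbitrary string to the provider union.
export function isExtractionProvider(v: string): v is ExtractionProvider {
  return (EXTRACTION_PROVIDERS as readonly string[]).includes(v);
}
```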
### Key Technical Decisions
- Default model: `anthropic/claude-haiku-4-5-20251001`
- Default timeout: 60 s
- Server management: the daemon spawns `opencode serve --port 4096` as a child process and tracks it for cleanup on shutdown
- Session reuse: a single session is created on the first call and reused for all subsequent calls
- Response handling: SSE (server-sent events) are collected from the `/sessions/{id}/message` endpoint until completion
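The SSE-collection decision could be sketched as below. This is a generic `text/event-stream` parse, not OpenCode's confirmed format: the `data:` field and the `[DONE]` completion sentinel are common SSE conventions and are assumptions here, which is exactly what the next step (inspecting `/doc`) is meant to verify.

```typescript
// Hypothetical sketch: accumulate the data fields of an SSE stream until a
// completion sentinel. Assumes "data:" lines and a "[DONE]" terminator;
// OpenCode's real event shapes still need to be confirmed.

function collectSse(raw: string): string {
  const chunks: string[] = [];
  for (const line of raw.split("\n")) {
    if (!line.startsWith("data:")) continue; // skip comments and event: lines
    const payload = line.slice("data:".length).trim();
    if (payload === "[DONE]") break; // assumed end-of-stream marker
    chunks.push(payload);
  }
  return chunks.join("");
}
```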
### Next Steps
Implementation work is scheduled. First task: start the OpenCode server and inspect the actual response format via the `/doc` endpoint to confirm the SSE parsing approach.