diff --git a/SIGNET-ARCHITECTURE.md b/SIGNET-ARCHITECTURE.md
new file mode 100644
index 000000000..987722bb8
--- /dev/null
+++ b/SIGNET-ARCHITECTURE.md
@@ -0,0 +1,59 @@
+# How Signet Works (for the agent reading this)
+
+You are operating inside a system you cannot fully observe. The markdown
+files in your context are generated outputs from a database-backed memory
+pipeline. Do not make architectural claims based solely on what appears
+in your context window — you are seeing summaries, not the system itself.
+
+## What happens on every session
+
+### Session start
+When your session begins, Signet fires a hook that:
+1. Loads your identity files (AGENTS.md, SOUL.md, IDENTITY.md, USER.md)
+2. Loads your working memory summary (MEMORY.md)
+3. Fetches your top ~30 memories, scored by importance × time-decay
+4. Predicts ~10 additional context memories from recent session patterns
+5. Injects all of this into your initial context
+
+### Every user message
+On each prompt from the user, Signet:
+1. Extracts keywords from the user's message
+2. Runs full-text search against your memory database
+3. Scores results by importance × recency (decay formula: importance * 0.95^days)
+4. Injects the top matches (within a character budget) as context
+   alongside the message
+
+These appear as labeled blocks in your context — check their metadata
+(source, query terms, and result count) to understand what was searched.
+
+### Session end
+When the session closes, Signet:
+1. Queues your transcript for async LLM processing
+2. Runs a background worker that generates a dated session summary
+3. Extracts atomic facts (up to ~15 per session) into the memory database
+4. Scores how well the pre-loaded memories served the session (continuity score)
+
+You never see this happen — it runs after you're gone.
+
+## Your role in the memory system
+
+Auto-capture handles most things. Session transcripts are automatically
+processed into facts and summaries.
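The decay-weighted selection described above (score by importance × time-decay, then pack the top matches into a character budget) can be sketched as follows. This is a minimal illustration, not Signet's actual implementation: the `Memory` shape, the field names, and the 4,000-character budget are assumptions; only the `importance * 0.95^days` formula and the ~30-memory cap come from the text.

```typescript
// Illustrative sketch of decay-weighted memory selection.
// Field names and the character budget are assumptions, not Signet's real schema.
interface Memory {
  text: string;
  importance: number; // e.g. 0..1
  ageDays: number;    // days since the memory was created
}

// Decay formula from the doc: importance * 0.95^days
function decayedScore(m: Memory): number {
  return m.importance * Math.pow(0.95, m.ageDays);
}

// Rank by decayed score, take the top N, then greedily pack
// memories that still fit within the character budget.
function selectMemories(
  memories: Memory[],
  topN = 30,
  charBudget = 4000,
): Memory[] {
  const ranked = [...memories].sort(
    (a, b) => decayedScore(b) - decayedScore(a),
  );
  const picked: Memory[] = [];
  let used = 0;
  for (const m of ranked.slice(0, topN)) {
    if (used + m.text.length > charBudget) continue;
    picked.push(m);
    used += m.text.length;
  }
  return picked;
}
```

Under this formula a memory loses about 5% of its score per day, so a week-old memory needs roughly 1.4× the importance of a fresh one to rank equally.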
+Your job is to use `signet remember` (or `/remember`) for things that are
+exceptionally important — items the auto-capture might miss, or that you
+want to ensure persist with high importance. Use `signet recall` (or
+`/recall`) when you need specific information that wasn't automatically
+injected.
+
+## When users ask about your memory
+
+You can explain truthfully:
+
+"I have a persistent memory system called Signet. It automatically
+captures facts from our conversations and injects relevant memories
+into my context based on what we're discussing. I can also explicitly
+save and search memories. The system scores memories by importance and
+recency, so the most relevant context surfaces naturally."
+
+Do not speculate about implementation details beyond what's described
+here. If pressed for specifics, suggest the user check the Signet
+dashboard at http://localhost:3850.
diff --git a/memory/MEMORY.md b/memory/MEMORY.md
index cbe5076c3..7d50e2e9a 100644
--- a/memory/MEMORY.md
+++ b/memory/MEMORY.md
@@ -1,43 +1,54 @@
-
+
 Current Context
-We are actively enhancing the homepage by replacing the About section with live Steam review data to improve user engagement and credibility, following recent alignment concerns.
+Active development focused on the signetai project's NPM packaging, OpenMarketUI interactions, and the Rust-based pm-kalshi trading server. Currently resolving build dependencies, fixing UI keybindings, and ensuring the trading environment (paper mode) is operational.
 
 Active Projects
-1. Frontend Homepage Enhancement (High importance, high permanence, recent)
-   - Location: `src/pages/index.astro`, `src/components/Community.tsx`, `src/lib/steam.ts`
-   - Status: Steam review integration complete; About section to be replaced with new component
-   - Blockers: None identified
-   - Next: Deploy to Cloudflare Pages and verify Steam review data loads
+  Signetai NPM Compatibility
+    Location: `/home/nicholai/signet/signetai`
+    Status: `bin/postinstall` converted to CJS to resolve NPM installation errors.
+    Next Steps: Monitor for Dependabot security advisories. Ensure the Predictive Memory Scorer (Rust) builds correctly and the daemon starts on port 3850.
 
-2. Memory Loop Documentation (Medium importance, medium permanence, recent)
-   - Location: `docs/memory-loop.excalidraw`, `docs/memory-loop.mmd`
-   - Status: Diagrams generated and stored
-   - Blockers: None
-   - Next: Reference for memory pipeline
+  pm-kalshi Trading System
+    Location: Rust crate (within the signetai directory).
+    Status: `pm-server` is a library crate (no `main.rs` entry), while `pm-kalshi` contains the binary targets. Currently running `kalshi-paper` to launch the web dashboard on `127.0.0.1:3030`.
+    Blocker/Issue: The `data/markets.csv` file is 6.7GB; may cause slow loading or backtest errors. Requires `just fetch-kalshi` or the Python data fetcher to populate.
+    Next Steps: Verify web dashboard responsiveness and data loading speed.
 
-3. Phoenix Model Parameters Check (Low importance, low permanence)
-   - Location: Repository (not specific)
-   - Status: Phoenix Ranker (~480K) and Retrieval Model (~600K) confirmed as demo-scale
-   - Blockers: No training code exists in repository
-   - Next: User question answered
+  OpenMarketUI / Watchtower Interaction
+    Location: `/home/nicholai/signet/signetai` (UI components).
+    Status: Working on interactive pipeline visualization and data collection.
+    Next Steps: Continue debugging the trait-based architecture (Source → Filter → Scorer → Selector → OrderExecutor) to ensure smooth data flow.
 
 Recent Work
-- Implemented Steam review integration: extended `src/lib/steam.ts` to fetch individual reviews, created `src/components/Community.tsx`, and updated `src/pages/index.astro` to replace About section
-- Generated memory loop diagrams (excalidraw and mermaid) with emoji issues resolved
-- Clarified Phoenix models are inference-only with no training capability
+  Feb 26 NPM Install Fix: Converted the postinstall script to CommonJS (CJS) so `npm install` completes without errors, allowing the binary distribution to be used directly.
+  Keybinding Bug Resolution: Identified and fixed a bug where the `Enter` key (mapped as `"enter"`) was not triggering actions in the data tab because the system sent `"return"`. Updated `opentui/keybindings.ts`.
+  Server Initialization: Successfully started the `pm-kalshi` paper trading server with the web dashboard enabled, though release-mode compilation is slow.
 
 Technical Notes
-- Frontend: Astro static site deployed on Cloudflare Pages
-- Steam API: Requires `num_per_page > 0` for individual reviews (current implementation uses `num_per_page=0` for aggregate data)
-- Memory pipeline: Documented in `docs` with both visual and textual formats
+  Signet Architecture & Standards:
+    Path: Agent profile is stored at `~/.agents/`, NOT `~/.signet/`.
+    Linting: Uses Biome.
+    Commits: Must follow Conventional Commits format.
+    TypeScript: Strict mode is enforced; `any` types are strictly prohibited. All null checks must be explicit.
+    Predictive Memory Scorer: A Rust component that trains models locally; requires specific configuration to integrate with the main daemon.
+
+  Rust Build Distinctions:
+    `pm-server` is a library crate containing routes and WebSocket modules; it lacks a `main.rs` and requires a binary entry point (like `pm-kalshi`) to execute.
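The keybinding fix noted under Recent Work (`Enter` bound as `"enter"` while the terminal reported `"return"`) amounts to normalizing key names before lookup. A minimal sketch follows; the alias table, function names, and binding shape are assumptions for illustration, not the actual contents of `opentui/keybindings.ts`:

```typescript
// Sketch of key-name normalization; the alias table is an assumption,
// not the real opentui/keybindings.ts implementation.
const KEY_ALIASES: Record<string, string> = {
  return: "enter", // terminals commonly report the Enter key as "return"
  esc: "escape",
};

// Map any incoming key name onto its canonical form.
function normalizeKey(raw: string): string {
  const key = raw.toLowerCase();
  return KEY_ALIASES[key] ?? key;
}

// Bindings are declared against canonical names only...
const bindings: Record<string, () => string> = {
  enter: () => "open selected row",
};

// ...and every key event is normalized before lookup, so a raw
// "return" event still triggers the "enter" binding.
function handleKey(raw: string): string | undefined {
  return bindings[normalizeKey(raw)]?.();
}
```

Normalizing at the event boundary keeps the binding table free of terminal-specific spellings.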
+    `pm-kalshi` acts as the binary crate containing the entry point logic for the paper trading mode.
+
+  OpenMarketUI Architecture:
+    Utilizes a trait-based architecture for trading logic: `Data Source` -> `Filter` -> `Scorer` -> `Selector` -> `Order Execution`.
+
+  Environment:
+    Development is performed on Hyprland (Wayland compositor) running on Arch Linux.
 
 Rules & Warnings
-- ⚠️ Do not deploy Steam review component without testing data loading (prevents broken pages)
-- ⚠️ Phoenix models are inference-only – no training code exists; requires writing training loop from scratch
-- ⚠️ Always use `--release` when running Rust binaries to avoid debug mode performance issues
\ No newline at end of file
+  UI Development: CRITICAL. Never delegate UI tasks (buttons, dashboards, complex layouts) to subagents. Perform them directly according to Opus rules, ensuring all visual references are passed.
+  Database Safety: CRITICAL. Never delete a production database without first creating a backup.
+  Type Safety: Enforce strict TypeScript typing. Do not use `any` types; use explicit null checks.
\ No newline at end of file
diff --git a/memory/debug.log b/memory/debug.log
index 477bdb532..f1df08012 100644
--- a/memory/debug.log
+++ b/memory/debug.log
@@ -725,3 +725,5 @@ We are synthesizing a WORKING MEMORY document for Nicholai. Focus is on CURRENT
 2026-02-26T04:14:52.461895 [regenerate] starting regeneration
 2026-02-26T04:14:58.622408 [regenerate] found 625 transcripts, 50 memories
 2026-02-26T04:14:58.622510 [regenerate] trying model: glm-4.7-flash
+2026-02-26T04:16:20.152610 [regenerate] success with glm-4.7-flash (3552 chars)
+2026-02-26T04:16:20.152869 [regenerate] wrote 3589 chars to MEMORY.md
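The trait-based trading pipeline described in the Technical Notes (`Data Source` -> `Filter` -> `Scorer` -> `Selector` -> `Order Execution`) lives in Rust, but the stage contracts can be sketched in TypeScript terms as below. Every type and method name here is an illustrative assumption, not the actual traits in pm-kalshi or OpenMarketUI:

```typescript
// Illustrative stage contracts for the Source -> Filter -> Scorer ->
// Selector -> OrderExecutor pipeline; all names are assumptions.
interface Market { id: string; price: number; }
interface Order { marketId: string; side: "buy" | "sell"; }

interface Source { fetch(): Market[]; }
interface Filter { keep(m: Market): boolean; }
interface Scorer { score(m: Market): number; }
interface Selector { select(scored: [Market, number][]): Market[]; }
interface OrderExecutor { execute(orders: Order[]): void; }

// The pipeline composes the stages in order: fetch markets, drop the
// ones the filter rejects, score the rest, let the selector choose,
// then hand orders to the executor (paper mode would just record them).
function runPipeline(
  src: Source,
  f: Filter,
  sc: Scorer,
  sel: Selector,
  exec: OrderExecutor,
): void {
  const markets = src.fetch().filter((m) => f.keep(m));
  const scored: [Market, number][] = markets.map((m) => [m, sc.score(m)]);
  const chosen = sel.select(scored);
  exec.execute(chosen.map((m) => ({ marketId: m.id, side: "buy" as const })));
}
```

Keeping each stage behind its own interface lets a paper-mode executor and a live executor swap in without touching the upstream stages.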